Software process assessment wiki
Software life cycle models that specify discrete software processes, with rigorously specified entry and exit criteria and prescribed boundaries and interfaces, should be recognized as idealizations that must be adapted to reflect the realities of software development and maintenance within the organizational context and business environment.
Another practical consideration: software processes such as configuration management, construction, and testing can be adapted to facilitate operation, support, maintenance, migration, and retirement of the software.
Additional factors to be considered when defining and tailoring a software life cycle model include required conformance to standards, directives, and policies; customer demands; criticality of the software product; and organizational maturity and competencies. Other factors include the nature of the work (e.g., modification of existing software versus new development) and the application domain.
This topic addresses software process assessment models, software process assessment methods, software process improvement models, and continuous and staged process ratings. Software process assessments are used to evaluate the form and content of a software process, which may be specified by a standardized set of criteria. Two common kinds of assessment are capability evaluations and performance appraisals. Capability evaluations are typically performed by an acquirer (or potential acquirer) or by an external agent on behalf of an acquirer (or potential acquirer).
The results are used as an indicator of whether the software processes used by a supplier or potential supplier are acceptable to the acquirer. Performance appraisals are typically performed within an organization to identify software processes in need of improvement or to determine whether a process or processes satisfies the criteria at a given level of process capability or maturity.
Process assessments are performed at the levels of entire organizations, organizational units within organizations, and individual projects. Assessments may address issues such as whether software process entry and exit criteria are being met, risk factors and risk management, or lessons learned.
Process assessment is carried out using both an assessment model and an assessment method. The model can provide a norm for a benchmarking comparison among projects within an organization and among organizations. A process audit differs from a process assessment. Assessments are performed to determine levels of capability or maturity and to identify software processes to be improved.
Audits are typically conducted to ascertain compliance with policies and standards. Audits provide management visibility into the actual operations being performed in the organization so that accurate and meaningful decisions can be made concerning issues that are impacting a development project, a maintenance activity, or a software-related topic. Success factors for software process assessment and improvement within software engineering organizations include management sponsorship, planning, training, experienced and capable leaders, team commitment, expectation management, the use of change agents, plus pilot projects and experimentation with tools.
Additional factors include independence of the assessor and the timeliness of the assessment. Software process assessment models typically include assessment criteria for software processes that are regarded as constituting good practices. These practices may address software development processes only, or they may also include topics such as software maintenance, software project management, systems engineering, or human resources management.
A software process assessment method can be qualitative or quantitative. Qualitative assessments rely on the judgment of experts; quantitative assessments assign numerical scores to software processes based on analysis of objective evidence that indicates attainment of the goals and outcomes of a defined software process.
For example, a quantitative assessment of the software inspection process might be performed by examining the procedural steps followed and the results obtained, plus data concerning defects found and the time required to find and fix the defects, as compared to software testing. A typical method of software process assessment includes planning, fact-finding (collecting evidence through questionnaires, interviews, and observation of work practices), collection and validation of process data, and analysis and reporting.
Process assessments may rely on the subjective, qualitative judgment of the assessor, or on the objective presence or absence of defined artifacts, records, and other evidence.
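A minimal sketch of how such an evidence-based rating might be derived; the practices, evidence checks, and rating thresholds below are illustrative assumptions rather than the scale of any particular assessment standard.

```python
# Illustrative, simplified rating of one software process from objective evidence.
# Practices, evidence, and thresholds are hypothetical assumptions.

process_evidence = {
    # practice -> was the expected artifact or record found during fact-finding?
    "inspection plan exists": True,
    "inspection records kept": True,
    "defect data collected": False,
    "rework tracked to closure": False,
}

# Fraction of practices for which objective evidence was found.
achievement = sum(process_evidence.values()) / len(process_evidence)

def rating(fraction: float) -> str:
    """Map an achievement fraction to a coarse rating (thresholds are assumed)."""
    if fraction > 0.85:
        return "fully achieved"
    if fraction > 0.50:
        return "largely achieved"
    if fraction > 0.15:
        return "partially achieved"
    return "not achieved"

print(f"Achievement: {achievement:.0%} -> {rating(achievement)}")
```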
The activities performed during a software process assessment and the distribution of effort for assessment activities are different depending on the purpose of the software process assessment. Software process assessments may be undertaken to develop capability ratings used to make recommendations for process improvements or may be undertaken to obtain a process maturity rating in order to qualify for a contract or award.
The goal of a software process assessment is to gain insight that will establish the current status of a process or processes and provide a basis for process improvement; performing a software process assessment by following a checklist for conformance without gaining insight adds little value.
Software process improvement models emphasize iterative cycles of continuous improvement. A software process improvement cycle typically involves the subprocesses of measuring, analyzing, and changing. The Plan-Do-Check-Act model is a well-known iterative approach to software process improvement. Improvement activities include identifying and prioritizing desired improvements (planning); introducing an improvement, including change management and training (doing); evaluating the improvement as compared to previous or exemplary process results and costs (checking); and making further modifications (acting).
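A minimal sketch of one pass through such a cycle, assuming a hypothetical defect-density metric, baseline value, and acceptance target (none of these figures come from the text above):

```python
# Illustrative Plan-Do-Check-Act pass for a single process improvement.
# Metric, baseline, observed value, and threshold are all hypothetical.

baseline_defect_density = 4.2    # defects per KLOC before the change (assumed)
target_reduction = 0.20          # aim for a 20% reduction (assumed)

def plan():
    """Identify and prioritize the desired improvement (planning)."""
    return {"change": "introduce design checklists",
            "target": baseline_defect_density * (1 - target_reduction)}

def do(improvement):
    """Introduce the change, including change management and training (doing)."""
    return 3.1                   # defect density observed after a pilot (assumed)

def check(measured, target):
    """Compare results against previous and target values (checking)."""
    return measured <= target

def act(successful):
    """Standardize the change if it worked; otherwise adjust and repeat (acting)."""
    return "roll out organization-wide" if successful else "revise the change and re-pilot"

improvement = plan()
measured = do(improvement)
print(act(check(measured, improvement["target"])))
```

In practice the cycle repeats: the outcome of the acting step feeds the next planning step.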
The Plan-Do-Check-Act process improvement model can be applied, for example, to improve software processes that enhance defect prevention.

Software process capability and software process maturity are typically rated using five or six levels to characterize the capability or maturity of the software processes used within an organization.
A continuous rating system involves assigning a rating to each software process of interest; a staged rating system is established by assigning the same maturity rating to all of the software processes within a specified process level. A representation of continuous and staged process levels is provided in Table 8. Continuous models typically use a level 0 rating; staged models typically do not. At level 1, a software process is being performed (capability rating), or the software processes in a maturity level 1 group are being performed, but on an ad hoc, informal basis.
At level 2, a software process (capability rating), or the processes in maturity level 2, are being performed in a manner that provides management with visibility into intermediate work products and allows some control over transitions between processes.
At level 3, a single software process, or the processes in a maturity level 3 group plus the process or processes in maturity level 2, are well defined (perhaps in organizational policies and procedures) and are being repeated across different projects. Level 3 of process capability or maturity provides the basis for process improvement across an organization, because the process (or processes) is conducted in a similar manner.
This allows collection of performance data in a uniform manner across multiple projects. At maturity level 4, quantitative measures can be applied and used for process assessment; statistical analysis may be used. At maturity level 5, the mechanisms for continuous process improvements are applied. Continuous and staged representations can be used to determine the order in which software processes are to be improved.
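The difference between the two rating approaches can be made concrete with a small sketch; the process names and levels below are hypothetical, and the staged rule shown (taking the lowest rating in the level's group) is a deliberate simplification of how staged models aggregate goals.

```python
# Continuous representation: each software process of interest gets its own
# capability level (0-5). Process names and ratings are hypothetical.
capability_profile = {
    "requirements management": 3,
    "configuration management": 2,
    "software testing": 1,
}

# Under the continuous view, the profile suggests an improvement order:
# weakest processes first.
improvement_order = sorted(capability_profile, key=capability_profile.get)
print("Improve first:", improvement_order[0])

# Staged representation (simplified): a maturity level is claimed only when all
# processes grouped at that level satisfy its goals, so the organization's
# rating is limited by its weakest process in the group.
def staged_maturity(profile: dict) -> int:
    return min(profile.values())

print("Simplified staged maturity rating:", staged_maturity(capability_profile))
```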
In the continuous representation, the different capability levels for different software processes provide a guideline for determining the order in which software processes will be improved. In the staged representation, satisfying the goals of the set of software processes within a maturity level establishes that maturity level and provides a foundation for improving all of the software processes at the next higher level.

This topic addresses software process and product measurement, quality of measurement results, software information models, and software process measurement techniques (see Measurement in the Engineering Foundations KA).
Before a new process is implemented or a current process is modified, measurement results for the current situation should be obtained to provide a baseline for comparison between the current situation and the new situation.
For example, before introducing the software inspection process, effort required to fix defects discovered by testing should be measured. Following an initial start-up period after the inspection process is introduced, the combined effort of inspection plus testing can be compared to the previous amount of effort required for testing alone. Similar considerations apply if a process is changed.
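A small sketch of the before-and-after comparison just described, assuming invented effort figures in person-hours:

```python
# Hypothetical effort (person-hours) for one release before and after
# introducing software inspections; all figures are invented.
before = {"testing": 300.0, "fixing_test_found_defects": 180.0}
after = {"inspection": 60.0, "testing": 220.0, "fixing_defects": 110.0}

baseline_effort = sum(before.values())
combined_effort = sum(after.values())
change = (combined_effort - baseline_effort) / baseline_effort

print(f"Baseline (testing only):  {baseline_effort:.0f} person-hours")
print(f"Inspection plus testing:  {combined_effort:.0f} person-hours ({change:+.0%})")
```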
Software process and product measurement are concerned with determining the efficiency and effectiveness of a software process, activity, or task. The efficiency of a software process, activity, or task is the ratio of resources actually consumed to resources expected or desired to be consumed in accomplishing a software process, activity, or task (see Efficiency in the Software Engineering Economics KA).
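Following that definition, a minimal sketch of the efficiency ratio, again with invented effort figures:

```python
def process_efficiency(actual_effort: float, expected_effort: float) -> float:
    """Ratio of resources actually consumed to resources expected to be consumed."""
    return actual_effort / expected_effort

# 480 person-hours consumed against 400 planned: the ratio of 1.2 indicates
# that 20% more resources were consumed than expected.
print(process_efficiency(actual_effort=480.0, expected_effort=400.0))
```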
Effort or equivalent cost is the primary measure of resources for most software processes, activities, and tasks; it is measured in units such as person-hours, person-days, staff-weeks, or staff-months of effort, or in equivalent monetary units, such as euros or dollars.

Software processes, methodologies, and frameworks range from specific prescriptive steps that can be used directly by an organization in day-to-day work to flexible frameworks that an organization uses to generate a custom set of steps tailored to the needs of a specific project or group.
Many software development life cycle (SDLC) models have been developed to achieve different objectives. These models specify the various stages of the process and the order in which they are carried out.
The most widely used SDLC models are described below.

The waterfall model is a breakdown of project activities into linear sequential phases, where each phase depends on the deliverables of the previous one and corresponds to a specialisation of tasks. The approach is typical for certain areas of engineering design.

The V-model represents a development process that may be considered an extension of the waterfall model and is an example of the more general V-model.
Instead of moving down in a linear way, the process steps are bent upwards after the coding phase to form the typical V shape. The V-model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.
The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction (coarsest-grain abstraction uppermost), respectively.

The incremental build model is a method of software development where the model is designed, implemented, and tested incrementally (a little more is added each time) until the product is finished.
It involves both development and maintenance. The product is defined as finished when it satisfies all of its requirements. Each iteration passes through the requirements, design, coding, and testing phases, and each subsequent release of the system adds function to the previous release until all designed functionality has been implemented. This model combines the elements of the waterfall model with the iterative philosophy of prototyping.
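A toy sketch of the incremental idea, iterating through the phases until a hypothetical set of requirements is satisfied (the feature names are invented):

```python
# Toy illustration of incremental development: each increment passes through
# requirements, design, coding, and testing, adding a bit more functionality.
requirements = ["login", "search", "reporting"]   # hypothetical feature backlog
implemented = []

increment = 0
while implemented != requirements:                # "finished" when all requirements are met
    increment += 1
    feature = requirements[len(implemented)]      # next slice of functionality
    for phase in ("requirements", "design", "coding", "testing"):
        print(f"Increment {increment}: {phase} for '{feature}'")
    implemented.append(feature)                   # each release adds function to the previous one

print("All designed functionality implemented:", implemented)
```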
Nontechnical controls include security policies, administrative actions, and physical and environmental mechanisms. Both technical and nontechnical controls can further be classified as preventive or detective controls.
As the name implies, preventive controls attempt to anticipate and stop attacks. Examples of preventive technical controls are encryption and authentication devices.
Detective controls are used to discover attacks or events through such means as audit trails and intrusion detection systems.

Assess the probability that a vulnerability might actually be exploited, taking into account the type of vulnerability, the capability and motivation of the threat source, and the existence and effectiveness of your controls.
Rather than a numerical score, many organizations use the categories high, medium and low to assess the likelihood of an attack or other adverse event.
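A minimal sketch of how a qualitative likelihood and impact rating might be combined into a risk level; the matrix below is an illustrative assumption, since each organization defines its own mapping.

```python
# Qualitative risk matrix: (likelihood, impact) -> risk level.
# The mapping below is an illustrative assumption, not a prescribed standard.
RISK_MATRIX = {
    ("low", "low"): "low",       ("low", "medium"): "low",        ("low", "high"): "medium",
    ("medium", "low"): "low",    ("medium", "medium"): "medium",  ("medium", "high"): "high",
    ("high", "low"): "medium",   ("high", "medium"): "high",      ("high", "high"): "high",
}

def risk_level(likelihood: str, impact: str) -> str:
    """Look up the combined risk level for a threat/vulnerability pair."""
    return RISK_MATRIX[(likelihood.lower(), impact.lower())]

# Example: a vulnerability that is likely to be exploited and would have high impact.
print(risk_level("high", "high"))  # -> "high"
```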
Using the risk level as a basis, determine the actions that senior management and other responsible individuals must take to mitigate the risk. As general guidelines: for high risk, a plan for corrective measures should be developed as soon as possible; for medium risk, a plan should be developed within a reasonable period of time; for low risk, the team can decide whether to accept the risk or implement corrective actions.

The final step in the risk assessment process is to develop a risk assessment report to support management in making appropriate decisions on budget, policies, procedures and so on. For each threat, the report should describe the corresponding vulnerabilities, the assets at risk, the impact to your IT infrastructure, the likelihood of occurrence and the control recommendations.
A very simple example of one report entry is sketched below. You can use your risk assessment report to identify key remediation steps that will reduce multiple risks. For example, ensuring backups are taken regularly and stored offsite will mitigate the risk of accidental file deletion and also the risk from flooding.
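The sketch below illustrates such an entry, using the flooding and backup scenario mentioned above; every field value is invented for illustration and would come from your own assessment in practice.

```python
# One illustrative entry from a risk assessment report (all values are assumptions).
report_entry = {
    "threat": "flooding of the server room",
    "vulnerability": "backups stored on-site only",
    "assets_at_risk": ["file server", "customer records"],
    "impact": "high",
    "likelihood": "low",
    "risk_level": "medium",
    "control_recommendations": [
        "take regular backups and store them offsite",
        "relocate critical servers above ground level",
    ],
}

for field, value in report_entry.items():
    print(f"{field}: {value}")
```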
Each of these steps should have an associated cost and should deliver real benefit in reducing the risks. Remember to focus on the business reasons for each improvement you implement. As you work through this process, you will get a better idea of how the company and its infrastructure operate and how they can operate better.
Then you can create a risk assessment policy that defines what the organization must do periodically (annually in many cases), how risk is to be addressed and mitigated (for example, a minimum acceptable vulnerability window), and how the organization must carry out subsequent enterprise risk assessments for its IT infrastructure components and other assets.
Always keep in mind that the information security risk assessment and enterprise risk management processes are at the heart of cybersecurity. These are the processes that establish the rules and guidelines of the entire information security management program, answering the questions of which threats and vulnerabilities can cause financial harm to the business and how they should be mitigated.