These biases include changing priorities after the start of a project and the lack of any clear definition of "success". Each quality sub-characteristic is further divided into attributes. An attribute is an entity that can be verified or measured in the software product. Attributes are not defined in the standard, as they vary between software products. "Software product" is defined in a broad sense: it encompasses executables, source code, architecture descriptions, and so on.
|Published (Last):||21 December 2007|
|PDF File Size:||13.9 Mb|
|ePub File Size:||6.5 Mb|
|Price:||Free* [*Free Registration Required]|
National bodies that are members of ISO or IEC participate in the development of International Standards through technical committees established by the respective organization to deal with particular fields of technical activity. Other international organizations, governmental and non-governmental, in liaison with ISO and IEC, also take part in the work.
Draft International Standards adopted by the joint technical committee are circulated to national bodies for voting. The metrics listed in this International Technical Report are not intended to be an exhaustive set. Developers, evaluators, quality managers and acquirers may select metrics from this technical report for defining requirements, evaluating software products, measuring quality aspects and other purposes. They may also modify the metrics or use metrics which are not included here.
This report is applicable to any kind of software product, although not every metric is applicable to every kind of software product. Internal metrics measure the software itself, external metrics measure the behaviour of the computer-based system that includes the software, and quality in use metrics measure the effects of using the software in a specific context of use.
Some attributes may have a desirable range of values which does not depend on specific user needs but on generic factors; for example, human cognitive factors.
This International Technical Report can be applied to any kind of software for any application. Users of this International Technical Report can select or modify and apply metrics and measures from it, or may define application-specific metrics for their individual application domain. Intended users of this International Technical Report include: Acquirer, an individual or organization that acquires or procures a system, software product or software service from a supplier; Evaluator, an individual or organization that performs an evaluation.
Conformance: there are no conformance requirements in this Technical Report. References: ISO, Information technology — Vocabulary; the terms are also listed in Annex D. Symbols and abbreviated terms: the symbols and abbreviations used in this International Technical Report are listed therein. These give methods for measurement, assessment and evaluation of software product quality. They are intended for use by developers, acquirers and independent evaluators, particularly those responsible for software product evaluation (see Figure 1).
Internal metrics provide the users with the ability to measure the quality of the intermediate deliverables and thereby predict the quality of the final product. This allows the user to identify quality issues and initiate corrective action as early as possible in the development life cycle. The external metrics may be used to measure the quality of the software product by measuring the behaviour of the system of which it is a part.
The external metrics can only be used during the testing stages of the life cycle process and during any operational stages. The measurement is performed when executing the software product in the system environment in which it is intended to operate. The quality in use metrics measure whether a product meets the needs of specified users to achieve specified goals with effectiveness, productivity, safety and satisfaction in a specified context of use. This can only be achieved in a realistic system environment.
User quality needs can be specified as quality requirements by quality in use metrics, by external metrics, and sometimes by internal metrics. These requirements specified by metrics should be used as criteria when a product is evaluated.
It is recommended to use internal metrics having a relationship as strong as possible with the target external metrics so that they can be used to predict the values of external metrics.
However, it is often difficult to design a rigorous theoretical model that provides a strong relationship between internal metrics and external metrics. Therefore, a hypothetical model that may contain ambiguity may be designed and the extent of the relationship may be modelled statistically during the use of metrics. Additional detailed considerations when using metrics are given in Annex A of this International Technical Report.
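The statistical modelling described above can be sketched in code. The following is a minimal, hypothetical example: it fits an ordinary least-squares line between paired observations of an internal metric and the external metric it is meant to predict. The metric names and all data values are invented for illustration; the TR does not prescribe any particular model.

```python
# Hypothetical sketch: estimating the statistical relationship between an
# internal metric (e.g. review-phase fault density) and the external metric
# it is meant to predict (e.g. operational failure density). Data invented.

# Paired observations from past projects: (internal value, external value)
observations = [
    (0.10, 0.08), (0.25, 0.20), (0.40, 0.35),
    (0.55, 0.46), (0.70, 0.63), (0.90, 0.81),
]

n = len(observations)
mean_x = sum(x for x, _ in observations) / n
mean_y = sum(y for _, y in observations) / n

# Ordinary least-squares slope and intercept
sxy = sum((x - mean_x) * (y - mean_y) for x, y in observations)
sxx = sum((x - mean_x) ** 2 for x, _ in observations)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

def predict_external(internal_value: float) -> float:
    """Predict the external metric from a measured internal value."""
    return intercept + slope * internal_value

print(round(predict_external(0.5), 3))
```

In practice the strength of such a fit would be re-checked as measurement data accumulate, since the relationship is hypothetical rather than derived from a rigorous theoretical model.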
The following information is given for each metric in the table: a) Metric name: corresponding metrics in the internal metrics table and the external metrics table have similar names. NOTE: In some situations, more than one formula is proposed for a metric.
Measures fall into several types: size (e.g. function size, source size), time (e.g. elapsed time, user time), and count (e.g. number of changes, number of failures).
Target audience: identifies the user(s) of the measurement results. Metrics tables: the metrics listed in this clause are not intended to be an exhaustive set and may not have been validated; applicable metrics are not limited to those listed here. Additional specific metrics for particular purposes are provided in other related documents, such as functional size measurement or precise time-efficiency measurement.
NOTE: It is recommended to take a specific metric or measurement form from the relevant standards, technical reports or guidelines. Metrics should be validated before application in a specific environment (see Annex A). Readers of this International Technical Report are invited to provide feedback.
Any changes identified during the life cycle must be applied to the requirement specifications before they are used in the measurement process. Count the number of functions changed (added, modified, or deleted) during a development life cycle phase, then compare it with the number of functions described in the requirement specifications.
Count the number of functions that have implemented the accuracy requirements and compare with the number of functions with specific accuracy requirements. Count the number of data items that meet the requirements of specific levels of precision and compare to the total number of data items with specific level of precision requirements.
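Most of the internal metrics described in these tables share one pattern: count the items A that satisfy a property, divide by the items B in scope, and read values closer to 1 as better. A minimal sketch of that pattern follows; the function name and the example counts are illustrative, not taken from the TR.

```python
# Minimal sketch of the ratio pattern shared by many internal metrics:
# X = A / B, where A is the number of items satisfying a property and
# B is the number of items in scope. Values closer to 1 are better.

def ratio_metric(satisfied: int, required: int) -> float:
    """Return A / B, the generic internal-metric ratio (0 <= X <= 1)."""
    if required == 0:
        raise ValueError("metric undefined when no items are in scope")
    return satisfied / required

# Computational accuracy example: 18 of 20 functions implement their
# accuracy requirements (hypothetical counts from a design review).
accuracy = ratio_metric(18, 20)
print(accuracy)  # 0.9
```

The same helper applies unchanged to completeness, precision, auditability and the other counting metrics in this clause; only the interpretation of A and B differs.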
The closer to 1, the more complete. The closer to 1, the more correct. Count the number of access types that are being logged correctly as in the specifications and compare with the number of access types that are required to be logged in the specifications.
Data encryption — How complete is the implementation of data encryption? Count the number of implemented instances of encryption and compare with the number required in the specifications. Compliance — Count the number of items requiring compliance that have been met and compare with the number of items requiring compliance in the specification. Fault detection — Count the number of faults detected in review and compare it with the number of faults estimated to be detected in this phase.
Test adequacy — How many of the required test cases are covered by the test plan? Count the number of test cases planned and compare it with the number of test cases required to obtain adequate test coverage. (Phases: verification, validation, problem resolution, joint review. Audiences: developers, requirers, maintainers.) Incorrect operation avoidance — How many functions are implemented with incorrect-operation avoidance capability?
Count the number of functions implemented to avoid critical and serious failures caused by incorrect operations, and compare it with the number of incorrect-operation patterns to be considered.
Incorrect sequence of operation is one such pattern. NOTE: Fault tree analysis may be used to detect incorrect operation patterns. Count the number of implemented restoration requirements and compare it with the number of restoration requirements in the specifications. Count the number of implemented restoration requirements meeting the target restoration time (determined by calculation or simulation) and compare it with the number of restoration requirements with a specified target time. Count the number of items requiring compliance that have been met and compare with the number of items requiring compliance in the specification.
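The counting rules above can be applied mechanically to data collected in a review. The sketch below evaluates several of them at once; the field names and counts are invented for illustration and are not taken from the TR's tables.

```python
# Illustrative only: applying the counting rules to hypothetical review data.
# Each entry pairs implemented items (A) against items required by the
# specification (B); the metric value is X = A / B.

review_data = {
    "restoration_implemented": (7, 8),        # implemented / required in spec
    "restoration_meets_target_time": (6, 8),  # meeting target time / with target
    "compliance_items_met": (12, 12),         # compliance items met / required
}

results = {name: impl / req for name, (impl, req) in review_data.items()}

for name, x in results.items():
    print(f"{name}: X = {x:.2f}")
```

A value of 1.00, as for the compliance entry here, means every required item was met; lower values flag where corrective action is needed.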
(Inputs to measurement: design, source code, review report.) It should be possible to use the measures taken to establish acceptance criteria or to make comparisons between products. This means that the measures should count items of known value.
Results should report the mean value and the standard error of the mean. Internal understandability metrics assess whether new users can understand the capabilities of the product. Learnability is strongly related to understandability, and understandability measurements can be indicators of the learnability potential of the software. Operability metrics can be categorised by the dialogue principles in the corresponding ISO standard.
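The recommendation above, to report both the mean and the standard error of the mean for repeated measurements, can be sketched as follows. The response times are invented sample data.

```python
# Worked sketch: reporting the mean and the standard error of the mean (SEM)
# for a repeated measurement, as the text recommends. Times are hypothetical.
from math import sqrt
from statistics import mean, stdev

response_times = [1.92, 2.05, 1.88, 2.10, 1.97]  # seconds, invented data

m = mean(response_times)
sem = stdev(response_times) / sqrt(len(response_times))  # SEM = s / sqrt(n)
print(f"mean = {m:.3f} s, SEM = {sem:.3f} s")
```

Reporting the SEM alongside the mean lets an evaluator judge how stable the measured value is across repetitions.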
This is particularly important for consumer products. NOTE 1: This indicates whether potential users will understand the capability of the product after reading the product description. Demonstration — What proportion of functions requiring capability demonstration have a demonstration capability? NOTE: Three metrics are possible: completeness of the documentation, completeness of the help facility, or completeness of the help and documentation used in combination.
NOTE: Status includes progress monitoring. Attractiveness — A questionnaire to users assesses the attractiveness of the interface, taking account of attributes such as colour and graphical design. To measure efficiency, the stated conditions should be defined. When citing measured time-behaviour values, the reference environment should be stated (e.g. the known operating system as an input to measurement).
Estimate the time spent in system calls, and base the response-time estimate on it. Evaluate the efficiency of resource handling in the system, deriving a factor from the application's calls to the system when handling resources.
(Phases: verification, joint review. Audiences: developers, requirers.) Estimate the response time to complete a group of related tasks on this basis. What is the estimated memory size that the product will occupy to complete a specified task? Estimate the memory requirement. Memory utilization message density — What is the density of messages relating to memory utilization in the lines of code responsible for making system calls? Count the number of error messages pertaining to memory failures and warnings and compare it with the estimated number of lines of code responsible for system calls.
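The memory-utilization message density count described above can be sketched as a simple scan over source lines. The source lines, logging function names and keyword matching below are all invented for illustration; a real measurement would use whatever message conventions the product under evaluation actually follows.

```python
# Hypothetical sketch of the memory-utilization message density metric:
# count memory-failure/warning messages and compare with the (estimated)
# number of lines responsible for system calls. Lines and keywords invented.

source_lines = [
    'ptr = malloc(size);',
    'if (!ptr) log_error("memory allocation failed");',
    'fd = open(path, O_RDONLY);',
    'log_warn("memory usage above threshold");',
    'close(fd);',
]

# A: messages pertaining to memory failures and warnings
memory_messages = sum(
    1 for line in source_lines
    if "memory" in line and ("log_error" in line or "log_warn" in line)
)

# B: estimated number of lines of code responsible for system calls
syscall_lines = 5  # hypothetical estimate

density = memory_messages / syscall_lines
print(density)  # 0.4
```

A higher density suggests the code reports more diagnostic information around its memory-related system calls.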
(Phase: verification. Audience: developers. Inputs to measurement: design, source code, review report.) Value B comes from the requirement specifications. Readiness of diagnostic function — How thorough is the provision of diagnostic functions? Count the number of diagnostic functions implemented as specified and compare it with the number of diagnostic functions required in the specifications. NOTE: This metric is also used to measure failure analysis capability and causal analysis capability.
ISO/IEC TR 9126-3:2003