Software Quality Criteria And How We Can Digitize Them
The software world grows every day, drawing in more end-users and more software manufacturers alike. Thousands of software products enter the market daily, and they disappear at the same speed. Software repositories and versioning systems make room for new software every day while older programs steadily fade away. Why is there so much software out there, and why do so few products last?
Many methodologies can be chosen for a software development process. The choice is usually based on the experience of the software team, how previous projects were carried out, or the preference of team management. However, not every method suits every project. Learning how to choose the right method, and thereby achieving a long-lasting result, can help set a project up for success before it even starts. Producing higher-quality, longer-lasting software should be the central problem and target of every project.
Keywords: Quality, Software Development, Quality Metrics, Methodology
There are many different definitions of quality. For some it is the “capability of a software product to conform to requirements,” while for others it is synonymous with “customer value”. Software development, however, takes the definition in another light: software quality is judged by how well the delivered product meets its requirements, including:
- Functional requirements
- Non-functional requirements
Some of the more typical functional requirements include:
- Business Rules
- Transaction corrections, adjustments, and cancellations
- Administrative functions
- Authorization levels
- Audit Tracking
Some typical non-functional requirements are:
- Performance – response time, throughput, etc.
Quality Criteria Metrics
Our aim is to ensure that software is long-lasting, hassle-free, and appealing to the end-user; in short, that it is of high quality. Since the criteria we have determined will naturally be interpreted differently by everyone, basing them on numerical calculations provides a foundation for analyzing them in a more meaningful way.
2.1 Inner Metrics
Although inner metrics are not direct quality measures, they can be evaluated in conjunction with other metrics and play a role in revealing the quality of a project. At the same time, they serve as an important performance/cost/quality criterion for choosing the target methodology or for examining the method used in the project.
Number Of Teams
The number of people on a team relative to the size of the project directly affects the success and quality of the work done, so it is included in the metric list.
Number Of Developers
Although team members take on different roles, the people who do the core of the work on a software project are the developers, so their number has a definite effect on the quality of the project.
Time Of Project
Timing the project correctly will leave space for project stakeholders to find more accurate methods and better quality solutions during the development process.
Cost Of Project
A budget that is too low in proportion to the size and scope of the project will lead the stakeholders doing the work to pay less attention to the project, and its quality will decrease.
Mean Time To Failure
Mean Time To Failure (MTTF) is a very basic measure of reliability used for non-repairable systems. It represents the length of time that an item is expected to last in operation until it fails.
MTTF is, in effect, the expected lifetime of a product or device. Its value is calculated by tracking a large number of identical items over an extended period and observing their mean time to failure. MTTF is the total hours of operation divided by the total number of items being tracked.
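The MTTF calculation above can be sketched in a few lines of Python; the fleet figures below are hypothetical:

```python
def mttf(total_operating_hours: float, failures: int) -> float:
    """Mean Time To Failure: total hours of operation across all
    tracked items, divided by the number of observed failures."""
    return total_operating_hours / failures

# Hypothetical fleet: 10 identical devices accumulate 5,000 hours
# of operation before each has failed once.
print(mttf(5_000, 10))  # 500.0 hours
```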
Mean Time Between Failures (MTBF)
MTBF measures the estimated time that elapses between one failure of a system and the next during normal operation. Put more simply, MTBF helps you estimate how long an asset can run before the next problem occurs.
Mean Up Time
It refers to the time that elapses before an error occurs again after a previous error on the system was corrected. It tells us how long the system survives between failures.
Failure In Time
It is a unit that expresses failure rates as the number of failures per 10^9 hours of operation. As a calculation method, we can apply the following formula: FIT = 1 / MTBF × 10^9
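A small sketch of the FIT formula; the MTBF value is hypothetical:

```python
def failures_in_time(mtbf_hours: float) -> float:
    """FIT: expected number of failures per 10^9 hours of operation,
    computed as 1 / MTBF * 10^9."""
    return 1 / mtbf_hours * 1e9

# Hypothetical component with an MTBF of 2,000,000 hours.
print(failures_in_time(2_000_000))  # 500.0 FIT
```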
Mean Time To Repair (MTTR)
Mean Time To Repair (MTTR) refers to the time it takes to repair a system and restore it to full functionality.
MTTR can be found by dividing the total maintenance time by the total number of maintenance operations in a given time period.
Mean Time To Recovery (MTTR)
MTTR (mean time to recovery, or mean time to restore) is the average time it takes to recover from a product or system failure, measured from the moment the system or product fails to the moment it becomes fully operational again.
Mean time to recovery is calculated by adding up all the downtime in a specific period and dividing it by the number of incidents. So, let’s say our systems were down for 30 minutes across two separate incidents in a 24-hour period. 30 divided by two is 15, so our MTTR is 15 minutes.
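The worked example above translates directly into code:

```python
def mean_time_to_recovery(total_downtime_minutes: float,
                          incidents: int) -> float:
    """All downtime in a period divided by the number of incidents."""
    return total_downtime_minutes / incidents

# The example from the text: 30 minutes of downtime over two incidents.
print(mean_time_to_recovery(30, 2))  # 15.0 minutes
```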
Mean Time To Production (MTTP)
MTTP measures the elapsed time (in hours) from merging a change into the master branch of the gitlab-org/gitlab projects to deploying that change to gitlab.com. It serves as an indicator of how quickly application changes can be deployed to production.
Recent efforts by the release manager rotation have achieved MTTP under 24 hours.
Availability
The probability of performing required functions without failure under defined conditions for a defined period of time.
As a calculation method, we can apply the following formula:
Availability = MTTF / (MTTF + MTTR) × 100%
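Assuming the intended formula is the steady-state availability ratio MTTF / (MTTF + MTTR), a minimal sketch with hypothetical figures:

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: MTTF / (MTTF + MTTR) * 100%."""
    return mttf_hours / (mttf_hours + mttr_hours) * 100

# Hypothetical system: fails every 950 hours on average and takes
# 50 hours on average to repair.
print(availability(950, 50))  # 95.0 %
```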
Lead time is a general measure of the time from product concept to final delivery. It will depend on the complexity of the project and the number of engineers working on the project, and both will affect the project cost. By tracking project delivery times, developers can better predict time to market for existing and similar future projects.
The time between the definition of a new feature and its availability to the user. It helps you estimate how well your team is performing so far.
Production (MTTR + MTTF)
Production is an interruption analysis and includes the mean time between failures (MTBF) and the mean recovery/repair time (MTTR). These measure how well the software performs in a production environment.
The cycle time, which is part of the delivery time, is the time it takes to make a desired change in the software and put it into production. If a team uses DevOps and uses continuous integration, continuous delivery (CI / CD) applications, they can usually measure cycle time in minutes instead of months.
Productivity measures the amount of code that provides business value. A developer creating a completely new solution or making extensive code changes will go through many rounds of trial and error, with low efficiency. An engineer making many minor changes with a low error rate, by contrast, is likely to show a higher efficiency rate.
Churn is the percentage of time that developers spend editing, adding, or deleting their own code. High code churn indicates rework and can mean something is wrong in the development process.
As a mathematical example: a developer adds 100 lines of code and later removes or changes 20 of those lines; the code churn is 20%. In other words, 20% of the code written provided no benefit to users.
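The churn example can be sketched as:

```python
def code_churn(lines_added: int, lines_reworked: int) -> float:
    """Percentage of newly written lines later removed or changed."""
    return lines_reworked / lines_added * 100

# The example from the text: 100 lines added, 20 later reworked.
print(code_churn(100, 20))  # 20.0 %
```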
We can set the following metrics as High / Low / Medium or digitize their contribution with numerical values.
– Completion Rate
– Satisfaction Level
– Effectiveness Of Marketing Info
– Interfaces Understandability
– Understandability Of I/O
– Ease of component learning
– Contents of marketing info
– Contents of error message
– Interfaces Density
– Effectiveness Of Help System
– Size Of Help System
– Contents Of Help System
– Contents Of Demos
– Effectiveness Of Manuals
– Size Of Manuals
– Contents Of Manuals
Release Window Accuracy
Given a planned date to ship your product, score every day that you are late as a -1 and every day that you are early as a +1. Your current accuracy is the total number of days early or late that you ship. If you are very accurate you will score 0, but positive numbers are better than negative numbers.
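The scoring rule can be sketched as follows; the dates are hypothetical:

```python
from datetime import date

def release_accuracy(planned: date, actual: date) -> int:
    """+1 per day early, -1 per day late; 0 means shipped exactly
    on the planned date."""
    return (planned - actual).days

# Hypothetical release: planned for March 10, shipped March 7.
print(release_accuracy(date(2021, 3, 10), date(2021, 3, 7)))  # 3 (early)
```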
We can explain Feature Adoption as follows: it is the recognition by users of a feature we have created in the system, and the measurement of its use.
Mathematically; % of users exposed to the feature * % of exposed users having successful outcomes.
Let’s say 1000 users notice our new feature out of a total active user base of 10000; 500 complete a successful action;
(1000 exposed / 10000 total) * (500/1000) =
0.1 * 0.5 = 0.05 = 5%
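The same calculation in code:

```python
def feature_adoption(exposed: int, total_users: int,
                     successful: int) -> float:
    """% of users exposed to the feature times % of exposed users
    with a successful outcome."""
    return (exposed / total_users) * (successful / exposed) * 100

# The example from the text: 10,000 active users, 1,000 exposed,
# 500 successful outcomes.
print(feature_adoption(1_000, 10_000, 500))  # 5.0 %
```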
Net Promoter Score
Net Promoter Score is the calculated result of surveying your user base with a simple question:
“How likely is it that you would recommend this product to your colleagues, friends, and family?”
Negative: 6 or less.
Passive: 7 or 8.
Positive: 9 or 10.
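The score itself is conventionally computed as the percentage of positive responses minus the percentage of negative ones; a sketch with an invented survey:

```python
def net_promoter_score(scores: list) -> float:
    """Positives (9-10) minus negatives (0-6), as a percentage of all
    responses; passives (7-8) only enlarge the denominator."""
    positives = sum(1 for s in scores if s >= 9)
    negatives = sum(1 for s in scores if s <= 6)
    return (positives - negatives) / len(scores) * 100

# Hypothetical survey: six positives, two passives, two negatives.
print(net_promoter_score([10, 9, 9, 10, 9, 10, 8, 7, 5, 3]))  # 40.0
```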
Delivered Customer Value
To measure Customer Value, we recommend using regular team polls to gauge the perception of value delivered to the customer, from 0 to 10;
“How much value did we deliver to our customers with this product?”
Team polls can also be used for Sprint Preparedness, on a 1–10 scale with 10 being the most prepared: “How prepared were you for this past sprint?” Sprint Preparedness can be measured at the end of an iteration heading into a retrospective, or after planning, as you look forward to the upcoming sprint.
These metrics measure whether all the developed features and functions of a system perform fully and whether they match what was desired. Conformance and development according to what was desired shows that a higher-quality process was followed and that more attention was paid to the overall quality of the software.
We can set the following metrics as High / Low / Medium or digitize their contribution with numerical values.
– Is All Data Defined
– Is All Functions Referenced Defined
– Are All Defined Functions Used
– Are the Conditions of All Decision Points Determined
– Are All Reported Problems Resolved
– Is It Consistent With the Desired Design
The following criteria can be considered for easy maintenance and understandability of the project’s code. These metrics not only directly affect the quality of the project, but also indirectly affect the error rate, uptime, downtime, etc. by influencing the other criteria metrics.
We can set the following metrics as 1 / 0 with numerical values.
– Logical Data Model
– Data Model
– Lines Of Codes
– Activity Diagram
– ER Diagram
– User Manual
– Use Case Diagram
– Class Diagram
Error density, an important software test metric, helps the team determine the total number of errors found in a piece of software over a period of time or development. The result is then divided by the size of the module in question, which allows the team to decide whether the software is ready for release or requires further testing. The error density of software is counted per thousand lines of code.
Error Density = Error Count / Module Size
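A sketch of the density formula, with hypothetical module figures:

```python
def error_density(errors_found: int, module_kloc: float) -> float:
    """Errors per thousand lines of code (KLOC)."""
    return errors_found / module_kloc

# Hypothetical module: 12 errors found in 8,000 lines (8 KLOC).
print(error_density(12, 8.0))  # 1.5 errors per KLOC
```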
This metric is evaluated during the software development lifecycle (SDLC). The defect category metric provides an idea of the software’s different quality attributes such as usability, performance, functionality, stability, reliability, and more. In short, the defect category classifies defects according to the quality attributes of the software product and is measured with the help of the formula:
Defect Category = Defects of a certain category / Total number of defects
Defect Count [ Low | Medium | High | Critical ]
The depth and number of defects in a software project directly affect its quality, as they prevent testing from completing successfully. For this reason, we can count defects by severity, rate them according to their depth, and derive a test metric result.
Average Time to Correct Errors
With the help of this formula, team members can determine the average time it takes to correct errors by the development and testing team.
Total time spent fixing bugs / Number of errors
Percentage of Errors Accepted
The focus here is on the total number of defects accepted as valid by the development team, measured as a percentage. The lower the total number of errors, the more smoothly the application was developed toward its desired result.
(Valid errors / Total error reported) x 100
Percentage Fixed Error
With the help of this metric, the team can determine the percentage of errors fixed: the higher the percentage, the more issues have been resolved.
(Error fixed / Total number of errors reported) x 100
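Both percentages can be sketched together; the bug-tracker figures are hypothetical:

```python
def accepted_error_pct(valid_errors: int, total_reported: int) -> float:
    """(Valid errors / Total errors reported) x 100."""
    return valid_errors / total_reported * 100

def fixed_error_pct(errors_fixed: int, total_reported: int) -> float:
    """(Errors fixed / Total errors reported) x 100."""
    return errors_fixed / total_reported * 100

# Hypothetical tracker snapshot: 200 reports, 160 accepted as valid,
# 120 already fixed.
print(accepted_error_pct(160, 200))  # 80.0 %
print(fixed_error_pct(120, 200))     # 60.0 %
```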
Fail Test Status Coverage
Measures the percentage of test cases that failed. A low number of failed test scenarios may indicate that the project has achieved the desired goal and produced a quality result for its content.
(Number of failed tests / Total number of test cases) x 100
Successful Test Cases Coverage
Measures the percentage of successful test cases. Success, or minimal errors, in the test scenarios shows that the desired result can be achieved and indicates the quality of the project for its content.
(Number of successful tests / Total number of tests performed) x 100
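Both coverage figures in code; the test-run numbers are hypothetical:

```python
def failed_test_pct(failed: int, total: int) -> float:
    """(Number of failed tests / Total test cases) x 100."""
    return failed / total * 100

def passed_test_pct(passed: int, total: int) -> float:
    """(Number of successful tests / Total tests performed) x 100."""
    return passed / total * 100

# Hypothetical run: 250 test cases, 10 failed, 240 passed.
print(failed_test_pct(10, 250))   # 4.0 %
print(passed_test_pct(240, 250))  # 96.0 %
```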
Test coverage is an important metric that determines the scope of all functions of the software product. It indicates that test activities have been completed and can be used as a criterion for finalizing testing. It can be measured by applying the following formula:
Scope of Test = Number of detected errors / number of predicted errors.
Review efficiency is a metric used to reduce pre-delivery flaws in the software. Review defects can be found in documents as well as in code. Applying this metric reduces costs and the effort spent correcting or resolving errors. It also helps reduce the likelihood of an error leaking into later stages of testing and verifies the effectiveness of the testing. The formula for calculating review efficiency:
Review Efficiency (RE) = [Total number of inspection errors / (Total number of inspection errors + Total number of test errors)] × 100
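The review efficiency formula as a sketch, with hypothetical defect counts:

```python
def review_efficiency(review_errors: int, test_errors: int) -> float:
    """Share of all errors caught during review rather than testing."""
    return review_errors / (review_errors + test_errors) * 100

# Hypothetical counts: 30 errors found in review, 20 found later in test.
print(review_efficiency(30, 20))  # 60.0 %
```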
The most common metric for measuring complexity is The Halstead Complexity Metrics.
The Halstead complexity measures use the inputs of total and distinct operators and operands to compute the volume, difficulty, and effort of a piece of code. Difficulty, which is the (number of unique operators / 2) * (total number of operands/number of unique operands), is tied to the ability to read and understand the code for tasks such as learning the system or performing a code review. Again, you can count this on a system level, a class level, or a method/function level.
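The difficulty formula quoted above can be sketched directly; the operator/operand counts are hypothetical:

```python
def halstead_difficulty(unique_operators: int, unique_operands: int,
                        total_operands: int) -> float:
    """D = (unique operators / 2) * (total operands / unique operands)."""
    return (unique_operators / 2) * (total_operands / unique_operands)

# Hypothetical function: 10 unique operators, 8 unique operands,
# 24 operand occurrences in total.
print(halstead_difficulty(10, 8, 24))  # 15.0
```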
Chief among these metrics, we can determine the contribution of the following to quality:
– Program Length
– Time required to program
– Number of delivered bugs
The degree of effectiveness and efficiency with which a product or system can be successfully installed and/or uninstalled in a specified environment. The whole system, and the software itself, must be easy to install and remove in every environment it will run in.
The degree to which a product or system can effectively and efficiently be adapted for different or evolving hardware, software or other operational or usage environments. When we add a new module to a system instead of replacing it, this module/software is expected to adapt very easily.
The degree to which a product can replace another specified software product for the same purpose in the same environment. It is important that the software is compatible with other systems to which it will be connected, that it must be portable and integrated.
Easy To Change
A high score indicates that the module can evolve independently. When a module is replaced or renewed, the rest of the system should not be affected, and inter-module coupling should not affect the whole system.
A module can more easily be plugged into other releases. As an application is upgraded, features used in the previous version should carry over easily; the newer version should be easy to adopt because its logic encompasses that of the older version.
Lower scores indicate that it is easier to test all possible configurations of a module, because it can be reused in different places and applications without heavy integration work. This calls for an easy, simple structure.
Creating quality software and maintaining its continuity is the life goal of all software companies and teams. Software that constantly struggles with problems and has to be renewed and maintained more than necessary will, over the years, always end up falling out of use.
Far more criteria than those covered in our study could be listed as quality criteria. The main purpose of these criteria, which can be used as a checklist, is that they can be converted into metrics and digitized into an interpretable format. In this way, different projects can be compared with each other and their strengths and weaknesses determined.
We hope that this study, which helps us see where and how many problems exist in our projects, leads to much longer-lasting and higher-quality projects.