IT departments at agencies and organizations across the government each have their own ways of conducting testing and evaluation activities. In the eyes of the U.S. Department of Defense, Independent Verification & Validation (IV&V) is an independent system assessment that analyzes and tests the target system to 1) ensure that it performs its intended functions correctly, 2) ensure that it performs no unintended functions, and 3) measure its quality and reliability.
In the federal IT world, a common question is, “What is the difference between verification and validation?” Simply put, verification ensures the software product is built correctly, while validation ensures the right software product is built. Verification and validation are intended to improve the quality of the software during its life cycle, not afterward, and must therefore be performed as the software is being developed. Federal organizations requiring a very high level of accuracy in the estimation, design, construction, execution, and management of their IT programs have long used some form of independent verification and validation to assure software quality. This process is sometimes used internally as a “sanity check.”
IV&V teams are independent of the development organization on a technical, managerial, financial, and contractual basis, but have well-established, working relationships with the development organization. Early this year, the U.S. Department of Education published an IV&V handbook that stated:
The IV&V team will generate the test plans, test designs, test cases, and test procedures in preparation for IV&V testing. This independent testing will complement rather than duplicate the development team’s testing.
As a former Naval Sea Systems Command (NAVSEA) test engineer, I had the pleasure of working alongside IV&V teams, typically one or two FTEs, that conducted three primary test events to ensure the software product was ready to move forward in the software acquisition life cycle. Team makeup differs depending on the software being developed, resource capacity, and organizational experience, but throughout the DoD it is customary to conduct three unique tests before the software goes into production:
Depending on the organization’s capacity, experience, and the system being developed, some IV&V teams also conduct interface testing, but this is usually done at the system integration level.
Regression testing is defined as any type of testing that seeks to uncover new bugs or defects in existing functional and non-functional areas of a system. Regression testing is typically conducted after changes such as enhancements, patches, or configuration updates that may have altered the behavior of the system.
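As a minimal sketch of the idea, consider the following hypothetical Python example (the function and test cases are invented for illustration, not drawn from any real program): after a patch to a routine, a regression suite re-runs cases that worked before the change to confirm nothing previously correct has broken.

```python
# Hypothetical illustration of regression testing. Suppose parse_amount
# was just patched; the regression suite locks in previously working
# behavior so the patch doesn't silently break existing cases.

def parse_amount(text: str) -> float:
    """Parse a currency string like '$1,234.50' into a float."""
    return float(text.replace("$", "").replace(",", ""))

def test_regression_existing_cases():
    # Cases that passed before the patch must still pass after it.
    assert parse_amount("$1,234.50") == 1234.50
    assert parse_amount("100") == 100.0
    assert parse_amount("$0.99") == 0.99
```

In practice, suites like this grow with every fix: each resolved defect contributes a test case that guards against its reintroduction.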
Functional testing is a combination of quality assurance and software testing. Functional testing both “verifies a program by checking it against design document(s) and requirement(s)/specification(s)” (formally, the quality assurance process) and “validat[es] the software against the published user or system requirements” (Software Testing).
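A small hypothetical example may make this concrete. Assume a published requirement states that user IDs must be 6–12 alphanumeric characters (this requirement, and the function checked, are invented for illustration); a functional test validates the implementation directly against that stated requirement.

```python
# Hypothetical functional test: the implementation is checked directly
# against a (fictional) published requirement that user IDs be 6-12
# alphanumeric characters.

import re

def is_valid_user_id(user_id: str) -> bool:
    """Requirement: user IDs are 6-12 alphanumeric characters."""
    return re.fullmatch(r"[A-Za-z0-9]{6,12}", user_id) is not None

def test_user_id_requirement():
    assert is_valid_user_id("abc123")          # minimum length, valid chars
    assert is_valid_user_id("A1B2C3D4E5F6")    # maximum length
    assert not is_valid_user_id("short")       # below minimum length
    assert not is_valid_user_id("has space!")  # invalid characters
```

Each assertion traces back to a clause of the requirement, which is what distinguishes functional testing from ad hoc checking.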
Non-functional testing is the testing of the non-functional requirements of a software application and comprises a number of subordinate tests. Defense organizations typically focus on five non-functional tests: stress, endurance, performance, security, and usability.
*The terms “system” and “software” are used interchangeably in this article.
WANT TO LEARN MORE? CONTACT US TODAY