Thursday, December 13, 2007

Configuration Status Accounting

Configuration Status Accounting of Application Software Configuration Items? What is that? Why do I need it? The Configuration Status Accounting system will provide you with a record of the following information for each application software configuration item (CI):
1. The planned and actual dates on which the:
  • application software specification was released;
  • allocated baseline was established;
  • application software Test Plan for the CI was released;
  • application software Test Plan for the CI was approved;
  • application software design document was released;
  • application software design document was approved;
  • application software CI testing took place;
  • application software test report for the CI was approved;
  • application software Product Baseline was established.
2. For each change request made to the application software specification (the allocated baseline), the Configuration Status Accounting system will record:
  • the date of the change request;
  • the title of the change request;
  • the status of the change request;
  • the approval date of the change request; and
  • the planned and actual dates of change implementation.
This will ensure you have the latest and greatest version of your software within your archive and deployed to your customers. It also won't hurt you a bit if you have a SOX audit either :)
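
If it helps to picture it, here is a minimal sketch in Python of such a record, covering both the milestone dates and the change requests; the class and field names are illustrative, not a prescribed schema.

# Minimal sketch of a Configuration Status Accounting record.
# Field names here are illustrative, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional, List

@dataclass
class Milestone:
    name: str                      # e.g. "Specification released"
    planned: date
    actual: Optional[date] = None  # filled in when the event actually occurs

@dataclass
class ChangeRequest:
    title: str
    requested: date
    status: str                    # e.g. "open", "approved", "implemented"
    approved: Optional[date] = None
    planned_implementation: Optional[date] = None
    actual_implementation: Optional[date] = None

@dataclass
class ConfigurationItemRecord:
    ci_identifier: str                                        # e.g. "XYZ Project"
    milestones: List[Milestone] = field(default_factory=list)
    change_requests: List[ChangeRequest] = field(default_factory=list)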

Software Configuration Management: Configuration Identification of Software Configuration Items

Some good Policies for Configuration Identification of Software Configuration Items will greatly improve the success of your software archive. Suggested policies include:

1. Each application software configuration item must have a specification document; the following document types are recommended for discussion:
  • Business Need/Requirements Statement
  • Software Requirements Specification
  • Software Design Description
  • Source Code/Executable Code
  • Test Plan Procedure
  • Software User Manual and Run Book
  • Training Plan
  • Software Product Specification
2. Application software specifications should be contained in separate documents.

3. Each application software specification will be baselined (as part of the Allocated Baseline) and subject to formal change control.

4. Application software specifications should be traceable to the system-level requirements specification.

5. The content of the application software specification establishes the acceptance criteria for that software configuration item.

6. Each application software specification will be numbered with the configuration item's identifier as part of the document number; for example, the specification for application software configuration item XYZ Project will be XYZ Project-SPEC.

7. The naming convention applicable to application software configuration items is to have at most the first six characters be the CI number; for example, XYZ Project-OPEN-WINDOW links the open window module to the XYZ Project CI.
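
As a rough illustration of how policies 6 and 7 could be checked mechanically, here is a small Python sketch that follows the XYZ Project example; the function names are hypothetical and the six-character limit is not enforced here.

# Sketch of policies 6 and 7: derive the specification document number
# from the CI identifier, and check that a module name is prefixed by it.
def specification_number(ci_identifier: str) -> str:
    """Policy 6: the spec for CI 'XYZ Project' is 'XYZ Project-SPEC'."""
    return f"{ci_identifier}-SPEC"

def module_belongs_to_ci(module_name: str, ci_identifier: str) -> bool:
    """Policy 7: 'XYZ Project-OPEN-WINDOW' links the module to CI 'XYZ Project'."""
    return module_name.startswith(f"{ci_identifier}-")

print(specification_number("XYZ Project"))                             # XYZ Project-SPEC
print(module_belongs_to_ci("XYZ Project-OPEN-WINDOW", "XYZ Project"))  # True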

Agile Testing Types and Process

Here are the types of testing and the steps to be followed, in sequence, to assure your customers receive a quality product. For example, performance testing can only be accomplished successfully AFTER functional testing has been performed and the software has passed that step. To do otherwise would result in functional errors delaying the success of performance testing.

Step 1: Unit testing
Performed by the Developer. Unit test case design begins after a technical review approves the high-level design. The unit test cases shall be designed to test the program's correctness; in other words, each decision statement in the program shall be exercised so that it takes on both a true and a false value.
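
As a minimal sketch of that idea, the unit test below (for a hypothetical discount function) exercises the decision statement both ways.

# Sketch of a unit test aiming for decision coverage of a hypothetical function.
import unittest

def apply_discount(total: float, is_member: bool) -> float:
    # The decision under test: members get 10% off.
    if is_member:
        return total * 0.9
    return total

class ApplyDiscountTest(unittest.TestCase):
    def test_decision_true(self):           # decision evaluates to True
        self.assertAlmostEqual(apply_discount(100.0, True), 90.0)

    def test_decision_false(self):          # decision evaluates to False
        self.assertAlmostEqual(apply_discount(100.0, False), 100.0)

if __name__ == "__main__":
    unittest.main()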

Step 2: Test Planning
Performed by QA. Review requirement specifications, architectural designs and use cases to develop test plans, test steps and test scenarios. Identify any gaps or issues that need to be addressed.

Step 3: Requirements Traceability / Testability
Performed by QA. Traceability is the ability to show how requirements are derived from higher-level (or "parent") requirements. Conversely, it helps identify all lower-level requirements derived from a parent requirement.
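
One simple way to keep that two-way view is a traceability matrix. Here is a small Python sketch with made-up requirement IDs that maps each derived requirement to its parent and can be queried in either direction.

# Sketch of a requirements traceability matrix: child requirement -> parent.
# The requirement IDs are made up for illustration.
trace = {
    "SW-REQ-101": "SYS-REQ-10",   # software requirement derived from a system requirement
    "SW-REQ-102": "SYS-REQ-10",
    "SW-REQ-201": "SYS-REQ-20",
}

def parent_of(child: str) -> str:
    return trace[child]

def children_of(parent: str) -> list:
    return [child for child, p in trace.items() if p == parent]

print(parent_of("SW-REQ-101"))    # SYS-REQ-10
print(children_of("SYS-REQ-10"))  # ['SW-REQ-101', 'SW-REQ-102']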

Step 4: Integration Testing
Performed by QA. Integration testing proves that all areas of the system interface with each other correctly and that there are no gaps in the data flow. The final integration test proves that the system works as an integrated unit when all the fixes are complete.
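
As a rough sketch, an integration test can pass real data across the interface between two components and assert that nothing is lost along the way; the order and billing functions below are hypothetical stand-ins.

# Sketch of an integration test between two hypothetical components:
# an order service that emits a record and a billing service that consumes it.
def create_order(order_id: str, amount: float) -> dict:
    return {"order_id": order_id, "amount": amount}

def bill_order(order: dict) -> str:
    # Fails loudly if an expected field is missing, i.e. a gap in the data flow.
    return f"Invoice for {order['order_id']}: ${order['amount']:.2f}"

def test_order_to_billing_interface():
    order = create_order("A-42", 19.99)
    assert bill_order(order) == "Invoice for A-42: $19.99"

test_order_to_billing_interface()
print("integration interface check passed")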

Step 5: Build Verification / Smoke Test
Performed by QA. When a build has met completion criteria and is ready to be tested, the QA team runs an initial battery of basic tests to verify the build. If the build is not testable at all, QA rejects the build. If portions of the build are testable, those are tested and the results documented.
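
A smoke test often amounts to a short script of basic checks that either accepts the build for further testing or rejects it; the checks below are placeholders, a minimal sketch only.

# Sketch of a build verification (smoke) test: a few basic checks that decide
# whether the build is testable at all. The checks here are placeholders.
def application_starts() -> bool:
    return True   # e.g. the service process launches without crashing

def login_page_loads() -> bool:
    return True   # e.g. the login page returns successfully

def run_smoke_test() -> bool:
    checks = {"application starts": application_starts,
              "login page loads": login_page_loads}
    failures = [name for name, check in checks.items() if not check()]
    if failures:
        print("Build REJECTED, failed checks:", failures)
        return False
    print("Build accepted for functional testing")
    return True

run_smoke_test()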

Step 6: Functional Testing
Performed by QA. Functional testing assures that each element of the application meets the functional requirements of the business as outlined in the requirements document/functional brief, system design specification, and other functional documents produced.

Step 7: Performance, Load and Stress Testing
Performed by QA. Non-functional testing proves that the documented performance standards or requirements are met. Examples of testable standards include response time and compatibility with specified browsers and operating systems.
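
Here is a minimal sketch of checking one such standard, response time, against a documented threshold; the two-second limit and the simulated request are assumptions.

# Sketch of a response-time check against a documented performance standard.
# The 2-second threshold and the simulated request are assumptions.
import time

RESPONSE_TIME_LIMIT_SECONDS = 2.0   # documented performance requirement

def simulated_request():
    time.sleep(0.1)                 # stand-in for the real call under test

start = time.perf_counter()
simulated_request()
elapsed = time.perf_counter() - start

assert elapsed <= RESPONSE_TIME_LIMIT_SECONDS, (
    f"Response took {elapsed:.2f}s, over the {RESPONSE_TIME_LIMIT_SECONDS}s limit")
print(f"Response time {elapsed:.2f}s is within the documented limit")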

Step 8: Defect Fix Validation
Performed by QA. If any known defects or issues existed during development, QA tests specifically in those areas to validate the fixes implemented by the developers, who have also unit tested these fixes before sending them to QA for validation.

Step 9: Regression Testing
Performed by QA. Regression testing is performed after the release of each phase to ensure that there is no impact on previously released software. Regression testing cannot be conducted on the initial build because the test cases are taken from defects found in previous builds; on the initial build there is nothing to test against.
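
Here is a sketch of how a regression suite grows out of defects found in earlier builds; the defect IDs and the checks that guard them are made up for illustration.

# Sketch of a regression suite built from defects found in previous builds.
# Each entry pairs a defect ID with the check that reproduces (and now guards) it.
def check_defect_101():
    # build 3 defect: an empty cart used to raise instead of totalling to zero
    return sum([]) == 0

def check_defect_117():
    # build 4 defect: whitespace-only names were not treated as empty
    return "   ".strip() == ""

regression_suite = {"DEF-101": check_defect_101, "DEF-117": check_defect_117}

failures = [defect for defect, check in regression_suite.items() if not check()]
print("Regression failures:", failures or "none")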

Step 10: Error Management
Performed by QA. During the QA testing workflow, all defects will be reported using the error management workflow. Regular meetings will take place between QA, development, product and project management to discuss defects, priority of defects, and fixes.
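
One way to picture that workflow is a defect record that carries its priority and moves through a fixed set of states as it is discussed and fixed; the states and field names below are assumptions.

# Sketch of a defect record moving through an error-management workflow.
# The states, priorities, and field names are assumptions for illustration.
from dataclasses import dataclass

WORKFLOW = ["reported", "triaged", "in_fix", "ready_for_qa", "validated", "closed"]

@dataclass
class Defect:
    defect_id: str
    summary: str
    priority: str               # e.g. "high", "medium", "low"
    status: str = "reported"

    def advance(self) -> None:
        """Move the defect to the next state in the workflow."""
        index = WORKFLOW.index(self.status)
        if index < len(WORKFLOW) - 1:
            self.status = WORKFLOW[index + 1]

bug = Defect("DEF-118", "Save button ignores keyboard shortcut", "medium")
bug.advance()                   # reported -> triaged (discussed in the defect meeting)
print(bug.defect_id, bug.status)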

Step 11: QA Reporting and Readiness Review
Performed by QA. QA states the results of testing, reports outstanding defects/known issues, and makes a recommendation for release into production.

Step 12: Release Management
Performed by QA. Releasing software refers to the process of providing some named (or otherwise uniquely identified) files to others for use. The others may be your department at work, your classmates, or The World. Managing the release means you know, understand and can explain what went into it.
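
A minimal sketch of "knowing what went into it": a release manifest that records each uniquely identified file with a checksum, so the contents of the release can be stated exactly; the file names are placeholders.

# Sketch of a release manifest: record exactly which files went into a release,
# identified by name and checksum. The file names below are placeholders.
import hashlib
import json

def checksum(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_manifest(version: str, files: list) -> str:
    manifest = {"release": version,
                "files": {name: checksum(name) for name in files}}
    return json.dumps(manifest, indent=2)

# Example usage (placeholder file names, assumed to exist in the release directory):
# print(build_manifest("1.4.2", ["app.exe", "config.ini", "README.txt"]))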

Step 13: User Acceptance Testing
Performed by all concerned Product Managers, Project Managers, and a Customer Focus Group (if applicable) to verify that the new system, data, and software changes meet customer expectations and usability requirements.

Step 14: Release into production
Performed by QA & Project Team. If the project team determines that the build is acceptable for production, the configuration/version management team will migrate the build into staging for the implementation team to move into the production environment.

Step 15: Post-Implementation Testing
Performed by QA. Testing performed after the software has been deployed to ensure proper implementation.