System Life Cycle - Development to Evaluation

This is the second and concluding part of the System Life Cycle guide. We will look at what goes into the development, testing, implementation, documentation and evaluation phases.

What are the development stages and testing strategies that are adopted by System Analysts?

The following are some of the development and testing strategies adopted by System Analysts:

  1. Any system will contain data, which needs to be stored in files or database tables. So the file structure needs to be finalised at this stage: the type of data to be stored in each field, the length of each field, which field will be the key field, how the various data files will be linked, and so on. Once the file structure is determined, it is created and tested fully to make sure it is robust when the system actually goes live.
  2. Since it is important that correct data is stored in the files, certain techniques are adopted to make sure the data populating the files/database is of the right type and conforms to certain rules. So validation routines and verification methods are developed and used to ensure this happens (a sketch follows this list). Again, these routines have to be fully tested, both to ensure they trap unwanted data and to make sure that any data transferred from a paper-based system to the electronic system has been transferred accurately.
  3. Further, any system being developed will have some form of user interface. The types of hardware have already been considered; how these devices are used to interface with the final system now needs to be decided. For example, how the screens (and any other input devices) will be used to collect the data, and how the output will be presented. If specialist hardware is needed (for example, for people with disabilities), it will be necessary to finalise how these devices are used with the system when it is implemented. This is followed by thorough testing to ensure that the user screens are user-friendly and that the correct output is associated with the inputs to the system.
  4. Last but not least, in the modern world systems are integrated with each other so that data from one system moves seamlessly to another. For example, a Human Resources (HR) system needs to be integrated with the payroll system so that data from the HR system moves smoothly into the payroll system at scheduled intervals for the generation of payslips during the payroll cycle. So the integration touch points (data items) also need to be considered, developed and tested thoroughly.
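
As an illustration of points 1 and 2 above, here is a minimal sketch in Python, assuming a hypothetical employee/payslip file structure: the built-in sqlite3 module defines two linked tables (emp_id is the key field) and a simple validation routine type-checks and range-checks a record before it is stored. All table, field and function names are invented for illustration.

```python
import sqlite3

# A minimal file structure: emp_id is the key field linking the two tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,  -- key field
        name    TEXT NOT NULL,
        dept    TEXT NOT NULL
    );
    CREATE TABLE payslip (
        slip_id INTEGER PRIMARY KEY,
        emp_id  INTEGER NOT NULL REFERENCES employee(emp_id),  -- link to employee
        month   INTEGER NOT NULL,
        net_pay REAL NOT NULL
    );
""")

def validate_payslip(emp_id, month, net_pay):
    """A simple validation routine: type checks plus a range check."""
    if not isinstance(emp_id, int) or emp_id <= 0:
        return False  # key field must be a positive whole number
    if not isinstance(month, int) or not 1 <= month <= 12:
        return False  # month must be in the range 1-12
    if not isinstance(net_pay, (int, float)) or net_pay < 0:
        return False  # pay cannot be negative
    return True

# Only data that passes validation is written to the table.
record = (1, 7, 2500.00)
if validate_payslip(*record):
    conn.execute(
        "INSERT INTO payslip (emp_id, month, net_pay) VALUES (?, ?, ?)", record
    )
```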

How to test the system?

A computer system is generally developed in modular form. This allows the software to be broken down into smaller parts known as modules. Each part is developed separately by a programmer (or team of programmers) and is then tested to see if it functions correctly. Any problems resulting from the testing require the module to be modified and then tested again.
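
As a minimal sketch of this module-by-module approach, assume a hypothetical discount-calculation module: the function is developed in isolation and a unit test checks that it behaves correctly. If a test fails, the module is modified and the tests are run again. The function and test names are invented for illustration.

```python
import unittest

# The "module" under development: a single, separately testable function.
def apply_discount(price, percent):
    """Return the price after deducting a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestDiscountModule(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)  # invalid input must be trapped

if __name__ == "__main__":
    unittest.main()  # re-run after every modification to the module
```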

Once the development of each module is complete, the whole system needs to be tested, i.e., with all the modules functioning together. Even though each module may work satisfactorily on its own, when they are all put together there may be data clashes, incompatibilities, memory issues, etc.

All of this may lead to a need to improve the input and output methods, file/database structures, validation and verification methods, etc., and then to test everything fully again. It is a very time-consuming process, but the system has to be as close to perfect as possible before it goes live.

A test plan is usually written while the system is being developed. The test plan will contain every single thing that needs to be tested. For example:

  1. Does the system open or close properly?
  2. Can data be entered?
  3. Can data be saved?
  4. When something goes wrong does an error appear?
  5. Is invalid data rejected? And so on.

A typical test plan contains:

  1. Details of what is being tested.
  2. The test data to use.
  3. What is expected to happen when the test is performed.
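
A test plan can be kept in any format; purely as an illustration, the sketch below records each entry as the three items just listed. The individual tests and data are invented examples.

```python
# Each row of the test plan: what is tested, the data to use, the expected outcome.
test_plan = [
    {"test": "Save a record",        "data": {"name": "A. Patel"}, "expected": "record stored"},
    {"test": "Reject invalid month", "data": {"month": 13},        "expected": "error message shown"},
    {"test": "Search for a record",  "data": {"name": "A. Patel"}, "expected": "record found"},
]

for case in test_plan:
    print(f"{case['test']}: data={case['data']} -> expect {case['expected']}")
```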

When is the system tested?

Testing is normally done in two stages:

The first phase is done by the analysts and engineers, usually before the system is delivered to the users. This testing is known as alpha testing.

The second phase of testing is done when the system is delivered and installed. This testing is known as beta testing.

What are the testing criteria?

There are three criteria for testing:

Normal data values

This is data that would normally be entered into the system. The system should accept it and process it, and we can check the results that are output to make sure they are correct. For example, in the range 1-20, '6', '7', '15', '18' and '19' are normal data values.

Extreme data values

Extreme values are still normal data values; however, they sit at the absolute limits of the normal range. Extreme values are used in testing to make sure that all normal values will be accepted and processed correctly. For example, in the range 1-20, '1' and '20' are the extreme data values.

Abnormal data values

This is data that should not normally be accepted by the system. The values are invalid, so the system should reject any abnormal values. Abnormal values are used in testing to make sure that invalid data does not break the system. For example, for a month number, 1 to 12 are normal values, while 13, 1.5, -1 and 0 are abnormal data values; these should not be accepted, and an error message should be shown.
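
The three criteria come together in a single table of test values. The sketch below, assuming a hypothetical month validator, checks normal, extreme and abnormal values from the examples above.

```python
def is_valid_month(value):
    """Accept whole numbers 1-12 only (a hypothetical validation rule)."""
    return isinstance(value, int) and 1 <= value <= 12

# (value, should_be_accepted) - normal, extreme and abnormal cases together.
cases = [
    (6, True), (7, True),        # normal values
    (1, True), (12, True),       # extreme values: the limits of the range
    (13, False), (0, False),     # abnormal values: outside the range
    (-1, False), (1.5, False),   # abnormal values: negative / not a whole number
]

for value, expected in cases:
    assert is_valid_month(value) == expected, f"failed for {value}"
print("All normal, extreme and abnormal cases behave as expected.")
```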

Live Data

Once a system has been fully tested, it is then tested with live data, i.e., data with known outcomes. Live data is entered into the new system and the results are compared with the old system's results. Further modifications to the software may be needed following this testing procedure.
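
As a sketch of this comparison, assume the old and new calculations can both be run as functions (the two stand-ins below are hypothetical): the same live records, whose outcomes are already known, are fed to both systems and the outputs compared.

```python
# Hypothetical stand-ins for the old and new payroll calculations.
def old_system_net_pay(gross):
    return round(gross * 0.78, 2)

def new_system_net_pay(gross):
    return round(gross * 0.78, 2)  # should match the old system's known outcomes

# Live data: real records whose correct outcomes are already known.
live_records = [1500.00, 2750.50, 3200.00]

for gross in live_records:
    old, new = old_system_net_pay(gross), new_system_net_pay(gross)
    status = "OK" if old == new else "MISMATCH - modify the new system"
    print(f"gross={gross}: old={old} new={new} -> {status}")
```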

What are the various methods of implementation?

Once a system is fully tested and certified to go live, it is implemented in full. All existing data in paper files or electronic files is transferred to the new system.

There are four methods of implementation:

  1. Direct changeover
  2. Parallel running
  3. Phased implementation
  4. Pilot implementation/running

Direct Changeover

In this method, the old system is stopped completely and the new system is started. All of the data that used to be input into the old system now goes into the new system.

Advantages

  1. It takes minimal time and effort.
  2. It is the fastest way to implement a new system.
  3. We only have to pay one set of workers, those working on the new system.
  4. The new system is up and running immediately.
  5. The benefits are immediate.
  6. There is less likelihood of malfunction, since the new system will have been fully tested.

Disadvantages

  1. If the new system fails, there is no backup system, hence data can be lost.
  2. All the staff need to be trained before the change, which may be hard to plan.

Parallel Running

In this method, the new system is started, but the old system is kept running simultaneously for a while. All of the data that is input into the old system, is also input into the new system.

The old system is run alongside the new system for a period of time until all the people involved with the new system are happy that it is working as desired. The old system is then switched off and all the work is done entirely on the new system.

Advantages

  1. If anything goes wrong with the new system, the old system will act as a backup.
  2. The outputs from the old and new systems can be compared to check whether the new system is running correctly.
  3. It allows time for the staff to be trained gradually.
  4. If the new system fails, no data or information is lost.

Disadvantages

  1. Entering data into two systems and running the two systems simultaneously takes a lot of extra time and effort.
  2. It is more expensive than direct changeover, as we need to pay the staff monitoring both systems.

Phased Implementation

In this method, the new system is introduced in phases, i.e., step by step, gradually replacing parts of the old system until, eventually, the new system has taken over.

Advantages

  1. It allows users to get used to the new system gradually.
  2. Staff training can be done in stages.
  3. If the new system fails, you only need to go back to the latest phase; you do not have to review the whole system.
  4. It is possible to ensure each phase of the system works properly before expanding.
  5. You only need to pay for the work to be done once.

Disadvantages

  1. It can take a long time before the whole system is implemented.
  2. There is a cost in evaluating each phase, before implementing the rest.
  3. It is only suitable for systems consisting of standalone modules.

Pilot Running

In this method, the new system is first piloted (trial run) in one part of the business or organisation, for example in just one department or section. Once the pilot system is running successfully, the new system is introduced to all parts of the organisation.

Advantages

  1. All features of the new system can be fully run on trial.
  2. If something goes wrong with the new system, only one part of the organisation is affected.
  3. The staff who were part of the pilot can help to train the other staff.
  4. There is plenty of time available to train the staff.
  5. The implementation is on a smaller and more manageable scale.

Disadvantages

  1. For the department working on the pilot, there is no backup system if it malfunctions.
  2. It can take a long time to implement the system across the whole organisation.
  3. If the system fails in one of the sections, data can be lost.

The table below compares the costs, input requirements and risk of failure of the four changeover methods of system implementation.

Changeover method   Relative cost of method   Input needed by the user   Input needed by systems team   Impact of failure of method
Parallel            High                      High                       Low                            Low
Pilot               Medium                    Low                        Medium                         Low
Phased              Medium                    Medium                     Medium                         Medium
Direct              Low                       Medium                     Low*                           High

* Low if successful; otherwise very high, because of the amount of input needed.

What is documentation and what are the types of documentation?

Documentation is a set of written instructions that explains how to use and maintain the system.

There are two types of documentation:

User Documentation
The user documentation is intended to help the user of the system. The users are usually non-technical people who do not need to know how the system works. They just need to know how to use the system. User documentation usually includes:

  1. List of minimum hardware and software required to use the system.
  2. How to start and stop the system.
  3. How to save files.
  4. How to do a search.
  5. How to sort data.
  6. How to produce printouts.
  7. How to add, amend or delete records.
  8. The purpose of the system/program/software package.
  9. Limitations of the system.
  10. How to log in/log out of the system.
  11. Examples of inputs and outputs (both on screen and print layouts).
  12. Sample runs with results and actual test data used.
  13. Explanation of any error messages that might be shown.
  14. A troubleshooting guide/help lines/FAQs.
  15. Tutorials.
  16. Glossary of terms.

Technical Documentation
Technical documentation is designed to help programmers/analysts to make improvements to the system or to repair/maintain the system. These people need to know exactly how the system works.

Technical documentation usually includes:

  1. Details of the hardware and software required for the system.
  2. Minimum memory requirements.
  3. Known 'bugs' in the system.
  4. Program listing/coding.
  5. Programming language used.
  6. Program flowcharts/algorithms.
  7. Purpose of the system/program/software.
  8. Limitations of the system.
  9. Details of data structures (data type, field name, meaning, etc).
  10. List of variables used and their meaning/description.
  11. Details of expected inputs.
  12. Output formats.
  13. Details of validation checks.
  14. Flow charts describing how the system works.
  15. Meaning of error messages.
  16. Details of how data is processed.
  17. Diagrams showing how data moves through the system.
  18. Sample runs with results and actual test data used.

What is Evaluation?

Evaluation is the stage where the completed system is compared against the objectives that were set by the analysts in the analysis stage.

The outputs from the old and new systems are compared. The new system is considered a success if its objectives are met.

How to evaluate the system?

The System Analyst will use a number of techniques to evaluate the system:

  1. Compare the final solution with the original requirements.
  2. Identify any limitations in the system.
  3. Identify any necessary improvements that need to be made.
  4. Evaluate the users' responses to using the new system.
  5. Compare the results from the new system with results from the old system.
  6. Compare the performance of the new system with the performance of the old system.
  7. Observe users performing set tasks, comparing old with new.
  8. Measure the time taken to complete the task, comparing old with new (a timing sketch follows this list).
  9. Interview users to gather responses about how well the new system works.
  10. Give out questionnaires to gather responses about the ease of the new system.
  11. See whether cost efficiencies have been made.
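
For point 8, here is a minimal timing sketch, assuming the set task on each system can be exercised as a function (the two stand-ins below are hypothetical):

```python
import time

def run_task_old():
    time.sleep(0.02)  # stand-in for performing the set task on the old system

def run_task_new():
    time.sleep(0.01)  # stand-in for performing the set task on the new system

# Time the same task on each system and compare.
for name, task in [("old", run_task_old), ("new", run_task_new)]:
    start = time.perf_counter()
    task()
    print(f"{name} system: {time.perf_counter() - start:.3f}s")
```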

Based on the evaluation, two things might happen:

  1. Update of hardware because:
    • Of feedback from end-users.
    • New hardware comes on the market, necessitating change.
    • Changes within the company require new devices to be added or updated.
  2. Update of software because:
    • Of feedback from end-users.
    • Changes to the company structure or how the company works that may require modifications to the software.
    • Changes in legislation that may require modifications to the software.

This is the end of this guide. Hope you enjoyed it! Thanks for using www.igcsepro.org! We hope you will give us a chance to serve you again! Thank you!