Management approach to airborne satellite communications terminal testing
Special Topics - Aerospace Industry
Allen L. Johnson, Satellite Communications Group, Wright Laboratory, Wright-Patterson AFB, Ohio
The recent Desert Storm activity demonstrated the value of instant worldwide communications and high-tech weapons delivered by air. To provide the necessary worldwide communications to the aircraft, the Air Force Systems Command (soon to be integrated into the new Air Force Materiel Command) has developed a reliable, interference-resistant, global satellite communications system. This paper explores lessons learned in testing airborne satellite communications terminals and describes benefits incorporated in testing related equipment.
CHANGES IN TEST PHILOSOPHY
With the evolution in technology from analog radios to software/firmware-intensive digital radios, the basic test philosophy has changed. The analog radio was a well-defined, relatively inert entity that could be tested with a pass/fail method. By comparison, the software-intensive radio is a chameleon that changes its characteristics before one's very eyes. Testing has become more of an extension of the development phase rather than a sell-off of the final product. The modern avionics tester is no longer given a list of specifications to test against in a go/no-go fashion. Instead, the tester has a number of options or algorithms, and the ability to fine tune these options with software/firmware changes.
The proper management of an avionics system development requires a balanced approach to testing. The balanced approach starts with mathematical modeling and analysis to identify high-risk problems early. The feasibility of the design can be checked to assure it does not violate basic physics or achievable limits of known technology. High-risk problems identified in analysis are then evaluated using computer simulation. The simulation is designed to accomplish parametric evaluations or sensitivity analysis to narrow down the variables of the problem. If the solution to the problem still has a relatively high risk after simulation, it may be necessary to build a prototype system. A laboratory prototype avionics system could be evaluated under the appropriate environmental conditions such as temperature, altitude, vibration, and radio frequency interference. When the avionics system prototype has passed environmental and performance tests, it's time to evaluate the flight testing options.
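The escalation described above can be sketched as a simple decision rule. This is an illustrative model only, not from the article: the assumption that each test phase roughly halves remaining risk, and the numeric thresholds, are invented for the sake of the sketch.

```python
# Illustrative sketch of the balanced test approach: escalate a problem
# through successively more expensive test phases until its risk is
# acceptable. Phase names follow the article; the halving-of-risk model
# and the thresholds are hypothetical assumptions.

PHASES = ["analysis", "simulation", "lab prototype", "flight test"]

def phases_required(risk: float, reduction_per_phase: float = 0.5,
                    acceptable_risk: float = 0.2) -> list:
    """Return the phases needed, assuming each phase halves remaining risk."""
    used = []
    for phase in PHASES:
        used.append(phase)
        risk *= reduction_per_phase
        if risk <= acceptable_risk:
            break
    return used

print(phases_required(0.3))   # a low-risk item stops at analysis
print(phases_required(1.0))   # a high-risk item escalates further
```

The point of the sketch is the shape of the process, not the numbers: cheap phases retire most risk, and only the stubborn problems earn flight hours.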
Historically it has been very difficult to transition an avionics system to the System Program Office (SPO) or user without a flight test demonstration. The SPO, such as the B-2 aircraft program office, is very sensitive to cost and risk. They have, in effect, signed a fixed price contract with Congress to develop a certain number of aircraft at a specified cost. While extensive testing throughout the development phase and a well-run ground test may convince a design engineer that an avionics system is ready to transition, the SPO is looking for the low risk that only a flight test in the real airborne environment can establish. During the development of a major new satellite communications system such as Milstar, extensive in-plant engineering tests were planned to demonstrate the technical parameters of the system. However, a part of the development effort and the final phase of the reliability, availability, maintainability, and human factors demonstration was incorporated in the flight test phase to convince the SPO and user that the system was mature enough to be fielded.
COMBINED ENVIRONMENTAL FACTORS
A properly designed development includes environmental testing of the avionics system. The traditional approach to environmental testing is sequential testing. Temperature, altitude, and vibration are often tested independently to isolate the various environmental factors. In addition, the test scenarios do not represent the actual operational conditions to which the equipment will eventually be subjected. In the real environment, the avionics is subject to a simultaneous combination of environmental factors. This is the major cause of the large discrepancy between the reliability demonstrated during ground environmental testing and that experienced during airborne testing.
For example, an Air Force ultra-reliable airborne radio program underwent an extensive, well-controlled ground reliability test demonstrating 1,000 hours mean time between failures (MTBF). In subsequent flight tests in several different aircraft, the radio consistently demonstrated less than 100 hours MTBF. Similar experience has been documented on numerous avionics systems. A satellite communications modulator now used operationally showed 500 hours MTBF in ground tests and 25 hours MTBF in airborne developmental tests. Extensive redesign was required to correct the problems identified in the flight tests and improve the airborne reliability of the unit.
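The gap is easy to state numerically. A brief worked example, using the standard MTBF formula (operating hours divided by failures) with invented hour and failure counts chosen to match the ratios reported above:

```python
# Hypothetical illustration of the ground-vs-flight MTBF gap described in
# the article. The formula is standard; the operating hours and failure
# counts are invented to match the reported ratios, not real test data.

def mtbf(operating_hours: float, failures: int) -> float:
    """Mean time between failures: total operating hours per failure."""
    return operating_hours / failures

ground = mtbf(2000, 2)   # controlled ground test: 1,000 h MTBF
flight = mtbf(400, 5)    # combined airborne stresses: 80 h MTBF
print(f"ground {ground:.0f} h, flight {flight:.0f} h")
```

A tenfold or greater drop of this kind is the signature of combined environmental factors that sequential ground testing never exercised together.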
The velocity and acceleration encountered in the dynamic airborne environment are difficult to simulate. A system that works well in a slowly changing ground environment may fail disastrously in a dynamic airborne environment. A developmental model of a satellite communications modulator designed for the airborne command post worked well when tested on the ground. In flight, the computer processor overloaded (crashed) and had to be restarted five or ten times per hour. The problem was traced to rapid changes in multiple data bases occurring in flight. During dynamic maneuvers, the Doppler, antenna pointing, Inertial Navigation System (INS) inputs, and signal levels all changed rapidly. The interrupt-driven processor could not handle all the changes simultaneously. A reordering of software priorities solved the problems once the actual parameters were identified during flight testing.
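The fix described above, reordering software priorities so a burst of simultaneous updates cannot swamp the processor, can be sketched with a priority queue. The event names and priority values below are assumptions for illustration; they are not the actual modulator software.

```python
# Illustrative sketch of priority-ordered event handling: during a dynamic
# maneuver, Doppler, antenna pointing, INS, and signal-level updates all
# arrive at once, and the most urgent work must be serviced first instead
# of in arrival order. Names and priorities are hypothetical.
import heapq

# Lower number = more urgent (assumed ordering for the sketch).
PRIORITY = {"antenna_pointing": 0, "doppler": 1,
            "ins_update": 2, "signal_level": 3}

def drain(events):
    """Pop a burst of events most-urgent-first from a priority queue."""
    heap = [(PRIORITY[e], i, e) for i, e in enumerate(events)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, event = heapq.heappop(heap)
        order.append(event)
    return order

burst = ["signal_level", "ins_update", "doppler", "antenna_pointing"]
print(drain(burst))  # urgent updates serviced first, regardless of arrival
```

An interrupt-driven design that treats every update as equally urgent has no such ordering, which is consistent with the overload the flight tests exposed.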
In another example, an extremely high frequency (EHF) satellite communications system lost downlink signal lock at certain aircraft headings. Analysis of the flight data showed the error only occurred when the Doppler exceeded 16 kHz. While the system was designed to handle 100 kHz of Doppler on the downlink signal, the memory locations for the most significant data bits had inadvertently been assigned to multiple functions. Reassignment of the memory solved the problem that was observed, isolated, and characterized during flight testing.
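The 16 kHz threshold is suggestive, since 2**14 = 16,384. A bit-level sketch of how clobbered high-order bits could produce exactly this symptom; the word layout is an assumption for illustration, not the actual terminal's memory map:

```python
# Illustrative sketch: if another function overwrites the most significant
# bits of a stored Doppler word, only the low bits survive, and any value
# at or above 2**14 = 16384 Hz wraps around -- consistent with a failure
# threshold near 16 kHz. The 14-bit boundary is an assumed representation.

def stored_doppler(doppler_hz: int, usable_bits: int = 14) -> int:
    """Doppler as read back when only the low `usable_bits` survive."""
    return doppler_hz & ((1 << usable_bits) - 1)

print(stored_doppler(15000))   # below 16 kHz: value reads back intact
print(stored_doppler(20000))   # above 16 kHz: value reads back corrupted
```

Below the boundary the corruption is invisible, which is why the fault never appeared in benign ground conditions and only surfaced at high-Doppler aircraft headings.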
One of the first military satellite communications systems designed for airborne use employed a triple-frequency diversity scheme to overcome the expected effect of multi-path fading of the downlink satellite signal. In this scheme, the same information was sent sequentially on three separate frequencies. A flight test evaluation of the technique showed that the multi-path fading was less severe than theoretically predicted. The test proved that the complicated and bandwidth-wasteful triple diversity scheme was not required. A much simpler scheme was employed in the production model. The discovery of the overdesign during flight testing saved millions of dollars and priceless frequency spectrum in the production version.
When a flight test program is undertaken, 25 to 50 percent of the information derived involves totally unexpected phenomena. While evaluating the advanced development satellite communications system for Milstar, the testers methodically accounted for factors contributing to the antenna pointing error: satellite ephemeris accuracy, computational error, INS noise, physical alignment, gear tolerance, servo accuracy, and INS latency. After improving the system so the pointing errors were less than two-tenths of a degree on the ground, pointing errors of one degree were still experienced when airborne. By repeating the alignment of the INS and antenna while airborne, it was proven that the fuselage deflection when airborne introduced the pointing error. These airborne tests proved the need for an airborne antenna alignment in installations where the INS and antenna were not mounted close together.
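An error budget of this kind is conventionally combined as a root-sum-square of independent contributors. The sketch below uses the contributor names from the account above, but the magnitudes are hypothetical, and the independence assumption behind RSS combination is itself an assumption:

```python
# Illustrative root-sum-square pointing-error budget. Contributor names
# follow the article; the magnitudes (in degrees) are invented examples,
# and RSS combination assumes the error sources are independent.
import math

contributors_deg = {
    "satellite ephemeris": 0.05, "computation": 0.02, "INS noise": 0.08,
    "physical alignment": 0.10, "gear tolerance": 0.06, "servo": 0.05,
    "INS latency": 0.07,
}

def rss(errors):
    """Combine independent error terms as the root of the sum of squares."""
    return math.sqrt(sum(e * e for e in errors))

ground = rss(contributors_deg.values())
print(f"ground budget: {ground:.2f} deg")   # within the 0.2 deg achieved

# One large unmodeled term (e.g. in-flight fuselage flexure, value assumed)
# dominates everything that was carefully budgeted:
airborne = rss(list(contributors_deg.values()) + [0.98])
print(f"airborne: {airborne:.2f} deg")
```

The arithmetic makes the lesson concrete: once a single unbudgeted term is large, refining the small terms further buys almost nothing, and only a flight test could reveal that the term existed.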
ALLOCATION OF TEST RESOURCES
The question of how to allocate the time and funds between analysis, simulation, ground test, and flight test is a difficult one, and the answer must be weighed for each system. However, some general guidelines are available.
The analysis phase is the least expensive part of the test and must be heavily emphasized. Brainstorming sessions are helpful in identifying many potential problems. Each potential problem must be analyzed thoroughly to identify expensive, high-risk areas. The analysis phase should continue throughout the entire test program. When simulation, ground test, or flight test identify a new problem, the problem is fed back into the analysis phase.
Most high-risk areas identified by analysis can be simulated or modeled. In those cases where a simulation may be too costly or too time consuming, make certain the problems are tested by prototyping or during ground test.
Those problems that are still high-risk following simulation are also prototyped and tested in the laboratory. Specific test objectives should be developed and laboratory tests conducted to meet those objectives. If the objective is to demonstrate a specific MTBF, then a plan to run the equipment for a certain number of hours may not be adequate. Laboratory tests must continue until there is a high confidence that the equipment will meet the required MTBF.
The flight test phase evaluates those problems that are still high risk following the ground tests. Approximately one-half of the flight test hours should be allocated to investigate unexpected problems or anomalies that will be discovered during the flight test phase.
DEVELOPMENTAL FLIGHT TESTING
While the previous discussion is centered on how to test to determine whether the avionics system meets requirements, the current and future generation of software/firmware-intensive avionics systems allow one to do rapid prototyping or development in the air. In developing an airborne microwave antenna system, several pointing algorithms were considered. When it became obvious that a clear decision between the options could not be made based solely on ground test information, an airborne test was undertaken. In a single flight it was possible to switch between the antenna pointing algorithms and compare the pointing accuracy, stability, and robustness of the competing approaches. That single flight test accomplished what a six-month analysis had failed to resolve.
CHEMISTRY OF QUALITY
The latest concepts of Total Quality Management (TQM) are a logical extension of a common sense approach to project management that many people have been practicing for years. The basis for good project management is good planning, commitment and good communications. The process starts with a clear understanding of the customer's needs.
The developer must be in close contact with the customer and listen to the customer's problems. The developer then needs to examine the problem in light of existing and future technologies to be able to determine how best to satisfy the needs. New technology isn't always the answer. A change in procurement approach or an application of existing technology may solve the problem. Currently, customers complain that the avionics systems are so expensive they cannot afford to procure them. One reason the systems are so expensive is that the government only authorizes a few to be procured at a time. The contractor cannot develop cost-effective manufacturing processes because of the low quantity. A change in procurement approach to a larger initial commitment could reduce the cost significantly.
Once the needs are identified, the developer should lay out a development, test, and transition plan that will carry the effort through to the point where the customer's needs are satisfied. Be sure to exhaust all available ideas before settling on the final plan.
When the development starts, it is important to involve the people who will be responsible for the final testing in all phases of the development, including specification writing. The final development model will be difficult to test unless the proper test points are designed and built in the avionics system from the start. The building of the test team starts when the development starts. Assign responsibility and delegate the authority to produce a good product during the development and testing. It is difficult to develop teamwork unless the people feel committed and have a sense of ownership from the start.
Review the personnel evaluation criteria and make sure they provide a positive rather than a negative incentive system. Some organizations' evaluation criteria reward the avoidance of schedule slips, overruns, or reduced product performance. In that case, the project manager will be very conservative to minimize these risks, and the product will suffer. If instead the system is designed to reward innovation and allow the managers to make honest mistakes, a positive atmosphere can be created where the sky is the limit, reasonable risks are taken, and exceptional results occur.
By involving the test team in the development process it is possible to generate a non-competitive, teamwork atmosphere where the individuals work together to improve the process rather than blaming each other. To increase the probability of success it is necessary to develop a chemistry of quality where everyone from the secretary to the technician, the engineer, the designer, and the project manager realizes their individual effort is important to the final outcome, and they will all share in the success or failure of the project.
The management of a flight test program is similar to the management of any major project. One should start with a clear understanding of the customer's needs—what are you trying to prove? Develop a plan that allocates resources in the most efficient way—spend the time and bucks on the real problems, not the showy ones. Assign responsibility and delegate the authority to the people doing the test—get them involved and committed to the effort. Allow innovation and allow people to make honest mistakes—provide an atmosphere where the results matter, not the process. In 25 years of developing airborne satellite communications systems, the Wright Laboratory has found that a careful mix of analysis, simulation, ground test and flight test provides avionics technology that can be successfully transitioned to the real Air Force—the fliers and fighters.
Allen Johnson currently heads the Satellite Communications Group of Wright Laboratory at Wright-Patterson AFB, Ohio. He has over thirty-two years of satellite communications experience in the area of design, development, and testing.
He is a graduate of the University of Illinois, with a graduate engineering degree from Northeastern University, and a graduate management degree from Ohio State University. He spent two years with the Bell Telephone Laboratories developing microwave devices prior to coming to work with the Air Force.
Mr. Johnson has published numerous articles in technical journals on satellite communication system development, airborne flight testing, and airborne propagation problems.
AUGUST 1992 pm network