By Robert Cravotta
Correlated-sensor processing is a leading-edge and increasingly sophisticated technology. Sensors are becoming smarter as designers place more processing capabilities into them. All types of applications—from high-end to low-end systems—are benefiting from using a richer set of sensors to enable them to handle more complex usage scenarios.
As the cost of microprocessors and sensors continues to drop, autonomous and semiautonomous systems can incorporate more intelligence and make better decisions based on a clearer understanding of their internal condition and the immediate environment surrounding them. Adding sensors to a design, along with the intelligent processing to correlate the data from all of those sensors, incurs higher design-time cost and complexity, but design teams are increasingly accepting this cost because the trade-off can be a differentiated system that delivers more capabilities more efficiently at prices similar to those of previous designs.
Using sensors in embedded designs is not new. What is changing is that designs of all kinds—from high-end autonomous systems to mass-produced consumer appliances—are incorporating more and more sensors and processors. The threshold for replacing mechanical-control structures keeps shifting as sensors and processors continue to drop in price. How companies partition the processing and correlate multiple sensors within the same system goes far beyond simple mechanical replacement—so much so that vendors often consider their approach proprietary information, because choosing the right mix of sensors and processing algorithms can enable a lower-cost bill of materials, better energy efficiency, and better system performance.
Events such as the DARPA (Defense Advanced Research Projects Agency) Urban Challenge and Grand Challenge demonstrate the extremes of how well sensor-rich, fully autonomous vehicles can sense, interpret, predict, interact with, and move within their environment (Reference 1). The challenges showcase autonomous automobiles that can drive and navigate entirely on their own, without a remote control or a human driver, by relying solely on various onboard sensors and positioning systems (Reference 2). In these challenges, however, the vehicles' handlers fed them a list of goal GPS (global-positioning-system) positions to which they could navigate. The vehicles did not decide where to go or in what order to visit the points; rather, they worked out how to get to each point on the list from their current position.
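As a rough illustration of that low-level task, the sketch below computes the distance and initial bearing from a vehicle's current GPS fix to each waypoint on such a list. It is a hypothetical, simplified example; none of the names, coordinates, or logic comes from an actual challenge entry.

```c
/* Hypothetical sketch: steering toward the next waypoint on a supplied list.
 * The goal positions are assumed to come from the handlers, as in the text;
 * the vehicle only works out how to reach them from its current fix. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
#define DEG2RAD (M_PI / 180.0)
#define EARTH_RADIUS_M 6371000.0

typedef struct { double lat_deg, lon_deg; } gps_fix_t;

/* Great-circle (haversine) distance between two GPS fixes, in meters. */
static double distance_m(gps_fix_t a, gps_fix_t b)
{
    double dlat = (b.lat_deg - a.lat_deg) * DEG2RAD;
    double dlon = (b.lon_deg - a.lon_deg) * DEG2RAD;
    double h = sin(dlat / 2) * sin(dlat / 2) +
               cos(a.lat_deg * DEG2RAD) * cos(b.lat_deg * DEG2RAD) *
               sin(dlon / 2) * sin(dlon / 2);
    return 2.0 * EARTH_RADIUS_M * asin(sqrt(h));
}

/* Initial bearing from fix a to fix b, in degrees clockwise from north. */
static double bearing_deg(gps_fix_t a, gps_fix_t b)
{
    double dlon = (b.lon_deg - a.lon_deg) * DEG2RAD;
    double y = sin(dlon) * cos(b.lat_deg * DEG2RAD);
    double x = cos(a.lat_deg * DEG2RAD) * sin(b.lat_deg * DEG2RAD) -
               sin(a.lat_deg * DEG2RAD) * cos(b.lat_deg * DEG2RAD) * cos(dlon);
    return fmod(atan2(y, x) / DEG2RAD + 360.0, 360.0);
}

int main(void)
{
    /* Goal positions fed to the vehicle; it decides only how to reach them. */
    gps_fix_t waypoints[] = { {34.5890, -117.3660}, {34.5920, -117.3610} };
    gps_fix_t current = {34.5875, -117.3700};   /* latest GPS/IMU estimate */

    for (unsigned i = 0; i < sizeof waypoints / sizeof waypoints[0]; ++i) {
        printf("Waypoint %u: %.0f m away, bearing %.1f deg\n", i,
               distance_m(current, waypoints[i]),
               bearing_deg(current, waypoints[i]));
        current = waypoints[i];  /* assume the planner eventually gets there */
    }
    return 0;
}
```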
The DARPA Urban Challenge, which took place on Nov 3, 2007, required autonomous vehicles to drive in urban-road-traffic conditions with other manned and unmanned vehicles while demonstrating that they could safely perform complex maneuvers, such as merging, passing, parking, and negotiating intersections. Six teams successfully completed the DARPA Urban Challenge trial. The winning vehicle, from the Tartan Racing team, employed seven lidar (light-detection-and-ranging), radar (radio-detection-and-ranging), and vision sensors in addition to the inertial GPS/IMU (inertial-measurement-unit) sensor (Reference 3). The choice of sensors supported fusion of the data for the planning algorithm and provided some overlap between the sensors for redundancy and correlation of the data.
Robots constitute a growing category of sensor-rich autonomous systems. For example, Boston Dynamics features a number of robots, such as the remote-controlled BigDog, that can navigate and recover while traversing difficult terrains, including icy patches, based on their sensors and onboard-control systems. BigDog’s sensors for locomotion include joint position, joint force, ground contact, ground load, a laser gyroscope, and a stereovision system. Additional sensors that focus on the internal health of the system monitor the hydraulic pressure, oil temperature, engine temperature, rotations per minute, and battery charge. Vendor iRobot also produces many types of robots, including consumer-level vacuums, such as the Roomba. The Roomba uses multiple IR (infrared) sensors either directly or with mechanical paddles to sense its environment (Reference 4).
Semiautonomous systems are a growing area for sensor-rich designs. On the high end are fly-by-wire aircraft and automobiles, and on the low end substantial growth is occurring in consumer appliances, such as washing machines. A semiautonomous system accepts some high-level direction from a human operator but is responsible for managing the low-level operational details of the system it monitors and controls. With a broad enough interpretation, most embedded systems fall into this category, and the designers of these systems may benefit from the lessons they learn from other sensor- or data-rich designs.
Complex remote-controlled systems, such as BigDog, must autonomously respond to their immediate environment and condition, partly because a remote-control interface provides insufficient data bandwidth and feedback for an operator to command the myriad adjustments the system needs to behave properly. When a design team chooses to implement a semiautonomous subsystem, that subsystem should perform its task at least as well as, and often faster or better than, most operators could perform the same task manually.
The semiautonomous fly-by-wire flight-control system for aircraft replaces the physical control between the pilot and the aircraft with an electrical interface. The control system receives the pilot’s commands and then determines how best to exercise the actuators, based on its own sensor readings, at each control point to optimally perform the desired behavior. In this case, the smarter control system enables the pilot to focus on the high-level control of the aircraft while the flight-control system manages the low-level control of each of the subsystems; this approach frees up valuable cognitive cycles for the pilot to focus on those environmental issues for which the flight-control system cannot compensate.
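The split is easiest to see in a single control loop. The following sketch shows a pilot commanding only a desired pitch rate while a simple proportional-integral control law closes the loop around the rate-gyro reading and drives the elevator actuator. The gains, limits, and plant model are invented for illustration and bear no relation to any real flight-control law.

```c
/* Minimal sketch of the high-/low-level control split in a fly-by-wire loop.
 * The pilot supplies only a desired pitch rate; the flight-control software
 * closes the loop around its own sensor reading. All names and gains here
 * are illustrative assumptions. */
#include <stdio.h>

typedef struct {
    double kp, ki;        /* proportional and integral gains */
    double integral;      /* accumulated error state */
} pi_controller_t;

/* One control-law update: the error between commanded and measured pitch
 * rate becomes an elevator command, clamped to the actuator's travel. */
static double elevator_cmd(pi_controller_t *c, double cmd_rate,
                           double meas_rate, double dt)
{
    double err = cmd_rate - meas_rate;
    c->integral += err * dt;
    double out = c->kp * err + c->ki * c->integral;
    if (out > 1.0)  out = 1.0;     /* full up-elevator   */
    if (out < -1.0) out = -1.0;    /* full down-elevator */
    return out;
}

int main(void)
{
    pi_controller_t ctl = { .kp = 0.8, .ki = 0.3, .integral = 0.0 };
    double pilot_cmd = 2.0;        /* deg/s pitch rate requested by the stick */
    double measured = 0.0;         /* deg/s from the rate gyro */

    /* Crude stand-in plant model, just to show the loop converging. */
    for (int step = 0; step < 5; ++step) {
        double u = elevator_cmd(&ctl, pilot_cmd, measured, 0.02);
        measured += 5.0 * u * 0.02;
        printf("step %d: elevator %+0.2f, pitch rate %0.2f deg/s\n",
               step, u, measured);
    }
    return 0;
}
```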
Automobiles are increasingly employing this same split in high- and low-level control between the driver and the onboard-control subsystems to make them safer and more efficient (Reference 5). Examples of autonomous subsystems within automobiles include antilock-braking systems; electronic-stability control; traction control; yaw control; and collision-mitigation systems, such as intelligent restraint systems and air bags. Depending on the circumstances, the driver may be unaware of these control systems.
The automotive-engine-management system represents a sensor-rich, semiautonomous embedded subsystem (Figure 1). In addition to monitoring the pedal interface with the driver, the system tracks many other internal data points, such as temperature, pressure, and chemical composition of the air, fuel, and exhaust within the system, and performs further correlations with other sensors measuring the spark, knock, and crankshaft position to optimize engine-power output, fuel efficiency, emissions performance, and driving experience, or even to accommodate alternative fuels.
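The sketch below illustrates, in greatly simplified form, how such a system might correlate crankshaft speed, manifold pressure, intake temperature, and knock feedback into a single spark-advance decision. The structure and thresholds are illustrative assumptions, not a production engine-management algorithm.

```c
/* Illustrative sketch of correlating several engine sensors into one
 * spark-advance decision: a base value from speed and load, trimmed by
 * intake temperature and pulled back whenever knock is detected. */
#include <stdio.h>

typedef struct {
    double rpm;            /* from the crankshaft-position sensor       */
    double manifold_kpa;   /* intake manifold pressure (engine load)    */
    double intake_temp_c;  /* intake-air temperature                    */
    int    knock_detected; /* knock-sensor flag from the last cycle     */
} engine_sensors_t;

static double spark_advance_deg(const engine_sensors_t *s, double *knock_retard)
{
    /* Base advance rises with rpm and falls with load; a very rough
       stand-in for the usual calibrated lookup table. */
    double advance = 10.0 + s->rpm / 500.0 - (s->manifold_kpa - 50.0) * 0.1;

    /* Hot intake air raises the knock tendency, so trim advance slightly. */
    if (s->intake_temp_c > 40.0)
        advance -= (s->intake_temp_c - 40.0) * 0.05;

    /* Knock feedback: retard quickly, then creep back over later cycles. */
    if (s->knock_detected)
        *knock_retard += 2.0;
    else if (*knock_retard > 0.0)
        *knock_retard -= 0.25;

    return advance - *knock_retard;
}

int main(void)
{
    double knock_retard = 0.0;
    engine_sensors_t s = { .rpm = 3000, .manifold_kpa = 80,
                           .intake_temp_c = 55, .knock_detected = 1 };
    printf("spark advance: %.1f deg BTDC\n", spark_advance_deg(&s, &knock_retard));
    return 0;
}
```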
False positives
In addition to design- and implementation-cost considerations, a semiautonomous embedded-control system's ability to resolve ambiguous and undefined conditions limits the scope of what such systems can take on. An "enhanced" system greatly diminishes in value if it requires too much operator intervention because it issues too many false warnings, or if it might damage the end system. The decline in reliability of the electronic systems in Mercedes automobiles in the early 2000s exemplifies how a system's inadequate resolution of ambiguous or undefined conditions can negatively impact the value of the whole system.
Correlating more sensor data is enabling semiautonomous embedded-control systems to safely take on more complex decisions because they are increasingly able to identify and avoid acting on false-positive conclusions. Collision detection on a high-end automobile can rely on many sensors operating in concert, such as long- and short-range radar, IR, video, inertial, and ultrasonic sensors, to detect a potential or imminent collision and validate the necessary response. Each of these sensors provides information about the surrounding environment that the control system can partially correlate with the data from the other sensors, filling in each sensor type's blind spots and avoiding undesirable decisions—such as deploying an air bag when a pebble hits the bumper of the vehicle.
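A minimal sketch of that corroboration idea follows: a deceleration spike from the crash accelerometer alone is not enough to fire the air bag; the decision also requires the radar to have reported a closing object and the wheel-speed sensors to show that the vehicle is moving. The thresholds and signal names are hypothetical.

```c
/* Sketch of multi-sensor corroboration before an irreversible action.
 * A single accelerometer spike (a pebble hitting the bumper) cannot deploy
 * the air bag by itself; independent observations must agree. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    double decel_g;          /* deceleration pulse from the crash accelerometer */
    double radar_closing_ms; /* closing speed of the nearest object, m/s        */
    double vehicle_speed_ms; /* from the wheel-speed sensors                    */
} crash_inputs_t;

static bool deploy_airbag(const crash_inputs_t *in)
{
    bool severe_decel   = in->decel_g > 20.0;          /* crash-level pulse   */
    bool object_closing = in->radar_closing_ms > 5.0;  /* something was there */
    bool moving         = in->vehicle_speed_ms > 4.0;  /* not a parked bump   */

    /* All three independent observations must agree before firing. */
    return severe_decel && object_closing && moving;
}

int main(void)
{
    crash_inputs_t pebble = { .decel_g = 25.0, .radar_closing_ms = 0.2,
                              .vehicle_speed_ms = 30.0 };
    crash_inputs_t crash  = { .decel_g = 35.0, .radar_closing_ms = 14.0,
                              .vehicle_speed_ms = 30.0 };
    printf("pebble strike: %s\n", deploy_airbag(&pebble) ? "deploy" : "hold");
    printf("real impact:   %s\n", deploy_airbag(&crash)  ? "deploy" : "hold");
    return 0;
}
```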
A growing class of warning subsystems in automobiles, such as lane-departure and blind-spot detection, interacts directly with the driver to provide information or assistance. These warning and response systems rely on correlating multiple sensor inputs to avoid issuing false alarms or responding incorrectly to a condition. For example, a lane-departure-detection subsystem could correlate data among visual, inertial, wheel-position, and steering-column-position sensors before issuing a warning to the driver.
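The following sketch shows one way such a cross-check might look: the camera's lane-offset estimate triggers a warning only when the inertial yaw rate confirms the drift and neither the steering-column position nor the turn signal suggests an intentional lane change. All signal names and thresholds are assumptions for illustration.

```c
/* Hypothetical lane-departure cross-check: warn only when two sensors agree
 * that the car is drifting and nothing suggests the driver intends it. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    double lane_offset_m;      /* camera: distance from lane center       */
    double yaw_rate_dps;       /* inertial sensor: rate of heading change */
    double steering_angle_deg; /* steering-column position                */
    bool   turn_signal_on;     /* driver has indicated a lane change      */
} lane_inputs_t;

static bool issue_lane_departure_warning(const lane_inputs_t *in)
{
    bool drifting      = in->lane_offset_m > 0.8 || in->lane_offset_m < -0.8;
    bool yaw_confirms  = in->yaw_rate_dps > 0.5 || in->yaw_rate_dps < -0.5;
    bool driver_intent = in->turn_signal_on ||
                         in->steering_angle_deg > 10.0 ||
                         in->steering_angle_deg < -10.0;

    return drifting && yaw_confirms && !driver_intent;
}

int main(void)
{
    lane_inputs_t drowsy = { 1.1, 1.2,  2.0, false };  /* unintended drift  */
    lane_inputs_t merge  = { 1.1, 1.2, 15.0, true  };  /* signaled maneuver */
    printf("unintended drift: %s\n",
           issue_lane_departure_warning(&drowsy) ? "warn" : "quiet");
    printf("signaled merge:   %s\n",
           issue_lane_departure_warning(&merge)  ? "warn" : "quiet");
    return 0;
}
```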
As vendors resolve the cost and design-complexity issues of using more sensors and processing intelligence in embedded-control systems, designers are incorporating more sophisticated autonomous-control systems in lower-cost designs, including midrange consumer applications. Priyabrata Sinha, principal applications engineer at Microchip, points out that appliances are stepping outside the state-machine box and adding more sensors and intelligence into the decision loop.
For example, contemporary washing machines can use three microcontrollers to manage the system and the user interface (Figure 2). An interesting thing to note is that the amount of flash memory—not the processor architecture’s size—is the most significant difference between the example two- and six-sensor designs. The extra memory allows the system to incorporate additional code for the new sensors and allows the program code to correlate the inputs that the more complex control algorithm uses.
A key area of opportunity for “fast”-response systems is how designers pair and connect sensor processors, says Ritesh Tyagi, senior product/segment manager at Renesas Technology America. A midrange-priced refrigerator may use as many as eight microcontrollers with the appropriate set of localized sensors to provide custom and optimal control for each of the refrigerator stations, such as the meat and vegetable drawers. These types of implementations are striking a balance between centralized and distributed processing to provide better reliability, meet an aggressive power budget, and simplify the user interaction with the appliance.
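The sketch below illustrates that balance in miniature: each hypothetical zone controller computes the cooling duty it wants from its own local temperature sensor, and a central coordinator scales the requests to fit an overall power budget. The zone names, gains, and budget are invented for illustration, not drawn from any actual appliance design.

```c
/* Sketch of distributed local control plus central coordination.
 * Each zone (assumed to have its own microcontroller and sensor) requests a
 * cooling duty; the coordinator keeps the total within a power budget. */
#include <stdio.h>

#define NUM_ZONES 4

typedef struct {
    const char *name;
    double setpoint_c;     /* target temperature for the zone         */
    double measured_c;     /* reading from the zone's local sensor    */
    double requested_duty; /* local controller's requested duty, 0..1 */
} zone_t;

/* Distributed step: simple proportional request from each zone controller. */
static void zone_update(zone_t *z)
{
    double error = z->measured_c - z->setpoint_c;
    double duty = error * 0.5;          /* warmer than target -> cool harder */
    if (duty < 0.0) duty = 0.0;
    if (duty > 1.0) duty = 1.0;
    z->requested_duty = duty;
}

/* Centralized step: scale all requests so the summed duty fits the budget. */
static void coordinator_allocate(zone_t zones[], int n, double budget)
{
    double total = 0.0;
    for (int i = 0; i < n; ++i) total += zones[i].requested_duty;
    double scale = (total > budget) ? budget / total : 1.0;
    for (int i = 0; i < n; ++i)
        printf("%-12s duty %.2f\n", zones[i].name,
               zones[i].requested_duty * scale);
}

int main(void)
{
    zone_t zones[NUM_ZONES] = {
        { "freezer",    -18.0, -15.0, 0.0 },
        { "fridge",       4.0,   6.5, 0.0 },
        { "meat drawer",  0.0,   2.0, 0.0 },
        { "veg drawer",   5.0,   5.5, 0.0 },
    };
    for (int i = 0; i < NUM_ZONES; ++i) zone_update(&zones[i]);
    coordinator_allocate(zones, NUM_ZONES, 2.0);  /* budget: two zones' worth */
    return 0;
}
```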
Unfortunately, the approach of using multiple types of sensory data and correlating them in the control algorithms is a sensitive, proprietary topic for many companies. However, the few high-level examples here might inspire you to explore whether additional ways exist to collect sensory information and correlate it with the other information in your system, yielding a better design that can more efficiently perform new value-added functions at lower overall cost.