iMouse System - Engineering Seminar


ABSTRACT
Incorporating the environment-sensing capability of wireless sensor networks into video-based surveillance systems can provide advanced services at a lower cost than traditional surveillance systems. The integrated mobile surveillance and wireless sensor system (iMouse) uses static and mobile wireless sensors to detect and then analyze unusual events in the environment.
Wireless sensor networks (WSNs) provide an inexpensive and convenient way to monitor physical environments, and integrating their context-aware capability into surveillance systems is an attractive direction. The integrated mobile surveillance and wireless sensor (iMouse) system consists of a large number of inexpensive static sensors and a small number of more expensive mobile sensors. The former monitor the environment, while the latter can move to specified locations and take more advanced actions. The iMouse system is thus a mobile, context-aware surveillance system.


INTRODUCTION
The remarkable advances in micro-sensing microelectromechanical systems (MEMS) and wireless communication technologies have promoted the development of wireless sensor networks. A WSN consists of many sensor nodes densely deployed in a field, each able to collect environmental information and together able to support multihop ad hoc routing. WSNs provide an inexpensive and convenient way to monitor physical environments. With their environment-sensing capability, WSNs can enrich human life in applications such as healthcare, building monitoring, and home security.

A wireless sensor network (WSN) is a wireless network consisting of spatially distributed autonomous devices using sensors to cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants, at different locations. The development of wireless sensor networks was originally motivated by military applications such as battlefield surveillance. However, wireless sensor networks are now used in many civilian application areas, including environment and habitat monitoring, healthcare applications, home automation, and traffic control.

The applications for WSNs are many and varied. They are used in commercial and industrial applications to monitor data that would be difficult or expensive to monitor using wired sensors. They could be deployed in wilderness areas, where they would remain for many years (monitoring some environmental variables) without the need to recharge/replace their power supplies. They could form a perimeter about a property and monitor the progression of intruders (passing information from one node to the next). 

Typical applications of WSNs include monitoring, tracking, and controlling. Some of the specific applications are habitat monitoring, object tracking, nuclear reactor controlling, fire detection, traffic monitoring, etc. In a typical application, a WSN is scattered in a region where it is meant to collect data through its sensor nodes.
A sensor node, also known as a mote, is a node in a wireless sensor network that is capable of performing some processing, gathering sensory information, and communicating with other connected nodes in the network. The main components of a sensor node are a microcontroller, a transceiver, external memory, a power source, and one or more sensors.

Microcontroller: The microcontroller performs tasks, processes data, and controls the functionality of the other components in the sensor node.
Transceiver: Sensor nodes combine the functions of transmitter and receiver into a single device known as a transceiver. Transceivers often lack unique identifiers. The operational states are transmit, receive, idle, and sleep.
External Memory: From an energy perspective, the most relevant kinds of memory are the on-chip memory of the microcontroller and flash memory; off-chip RAM is rarely, if ever, used. Flash memories are used because of their low cost and high storage capacity. Memory requirements are very much application dependent.
Power Source: A sensor node consumes power for sensing, communication, and data processing. Communication requires the most energy, while sensing and data processing require considerably less. Power is stored in either batteries or capacitors, with batteries being the main power source for sensor nodes. Sensors are now being developed that can renew their energy from solar or vibration sources. The two major power-saving policies are dynamic power management (DPM) and dynamic voltage scaling (DVS). DPM shuts down parts of the sensor node that are not currently used or active. DVS varies the power level according to the nondeterministic workload; by varying the voltage along with the frequency, it is possible to obtain a quadratic reduction in power consumption, as the sketch following these component descriptions illustrates.

Sensors: Sensors are hardware devices that produce a measurable response to a change in a physical condition such as temperature or pressure. They sense or measure physical data of the area to be monitored; the continuous analog signal they produce is digitized by an analog-to-digital converter and sent to the controller for further processing. A sensor node should be small, consume extremely little energy, operate at high volumetric densities, be autonomous and operate unattended, and be adaptive to the environment. Because wireless sensor nodes are microelectronic devices, they can only be equipped with a limited power source, typically less than 0.5 Ah at 1.2 V. Each sensor node has a certain coverage area within which it can reliably and accurately report the quantity it is observing.
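To make the DVS relationship concrete, here is a minimal Python sketch of the standard CMOS dynamic-power model P = C × V² × f; the effective-capacitance value and the voltage/frequency operating points are assumptions for illustration only. Halving the voltage (and the frequency with it) cuts the energy per cycle to a quarter.

```python
# Minimal sketch of the dynamic-power model behind DVS.
# The capacitance and operating points below are assumed, illustrative values.

def dynamic_power(c_eff, voltage, freq_hz):
    """Approximate dynamic power (watts) of CMOS logic: P = C * V^2 * f."""
    return c_eff * voltage ** 2 * freq_hz

C_EFF = 1e-9  # assumed effective switched capacitance, in farads
full = dynamic_power(C_EFF, 3.0, 8e6)  # full speed: 3.0 V at 8 MHz
slow = dynamic_power(C_EFF, 1.5, 4e6)  # scaled down: 1.5 V at 4 MHz

# Energy per cycle is P / f = C * V^2, so it falls quadratically with voltage.
print((slow / 4e6) / (full / 8e6))  # -> 0.25: a quarter of the energy per cycle
```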
Traditional surveillance systems typically collect a large volume of videos from wallboard cameras, which require huge computation or manpower to analyze. Integrating WSNs’ sensing capability into these systems can reduce such overhead while providing more advanced, context-rich services. For example, in a security application, when the system detects an intruder, it can conduct in-depth analyses to identify the possible source. 

The integrated mobile surveillance and wireless sensor system (iMouse) consists of numerous static wireless sensors and several more powerful mobile sensors. The benefits of iMouse include the following:
It provides online real-time monitoring. For example, when the system is capturing events, the static sensors can immediately inform users where the events are occurring, and the mobile sensors can later provide detailed images of these events.

It’s event-driven, in the sense that only when an event occurs is a mobile sensor dispatched to capture images of that event. Thus, iMouse can avoid recording unnecessary images when nothing happens. 

The more expensive mobile sensors are dispatched to the event locations. They don’t need to cover the whole sensing field, so only a small number of them are required.

It’s both modular and scalable. Adding more sophisticated devices to the mobile sensors can strengthen their sensing capability without replacing the existing static sensors.

Because mobile sensors run on batteries, extending their lifetime is an important issue. We therefore formulate a dispatch problem that addresses how to schedule mobile sensors to visit emergency sites while conserving their energy as much as possible. If the number of emergency sites is no larger than the number of mobile sensors, the problem can be transformed into a maximum-matching problem in a bipartite graph; otherwise, the emergency sites are grouped into clusters so that one mobile sensor can efficiently visit each cluster.

RELATED WORK IN WIRELESS SURVEILLANCE
Traditional visual surveillance systems continuously videotape scenes to capture transient or suspicious objects. Such systems typically need to automatically interpret the scenes and understand or predict the actions of observed objects from the acquired videos. One example is a video-based surveillance network in which an 802.11 WLAN card transmits the information that each video camera captures.

Researchers in robotics have also discussed the surveillance issue. Robots or cameras installed on walls identify obstacles or humans in the environment. These systems guide robots around these obstacles. Such systems normally must extract meaningful information from massive visual data, which requires significant computation or manpower. 

Some researchers use static WSNs for object tracking. These systems assume that objects can emit signals that sensors can track. However, results reported from a WSN are typically brief and lack in-depth information. Edoardo Ardizzone and his colleagues propose a video-based surveillance system for capturing intrusions by merging WSNs and video processing techniques. The system complements data from WSNs with videos to capture the possible scenes with intruders. However, cameras in this system lack mobility, so they can only monitor some locations. 

Researchers have also proposed mobilizers to move sensors, both to enhance coverage of the sensing field and to strengthen network connectivity. However, the integration of WSNs with surveillance systems has not been well addressed, which motivated us to propose the iMouse system.

SYSTEM DESIGN
System Architecture
The three main components of the iMouse system architecture are:

Static sensors
Mobile sensors 
External server. 

The following steps show the operations performed in Figure 1.

(1) The user issues commands to the network through the server.
(2) Static sensors monitor the environment and report events.
(3) When notified of an unusual event, the server notifies the user and dispatches mobile sensors.
(4) The mobile sensors move to the emergency sites and collect data.
(5) The mobile sensors report back to the server after collecting data.
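The following self-contained Python sketch simulates this five-step flow end to end; the site names, readings, threshold, and sensor labels are all illustrative assumptions rather than the actual iMouse implementation.

```python
# A minimal, self-contained simulation of the control flow in steps 1-5.
# All names, readings, and the threshold are illustrative assumptions.
from collections import namedtuple

Event = namedtuple("Event", "site temperature")

def static_sensor_reports(readings, threshold=50.0):
    """Step 2: static sensors flag sites whose reading crosses a threshold."""
    return [Event(site, t) for site, t in readings.items() if t > threshold]

def dispatch(events, mobile_sensors):
    """Steps 3-5: notify the user and send one mobile sensor per event site."""
    for event, sensor in zip(events, mobile_sensors):
        print(f"ALERT: unusual event at site {event.site} ({event.temperature} C)")
        print(f"  {sensor} -> site {event.site}: snapshots taken, reported to server")

readings = {"A": 72.0, "B": 24.5, "C": 68.3}  # step 1: server starts monitoring
dispatch(static_sensor_reports(readings), ["M1", "M2"])
```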

The static sensors form a WSN to monitor the environment and notify the server of unusual events. Each static sensor comprises a sensing board and a mote for communication. In our current prototype, the sensing board can collect three types of data: light, sound, and temperature. We assume that the sensors are in known locations, which users can establish through manual setting, GPS, or any localization schemes.

An event occurs when the sensory input is higher or lower than a predefined threshold. Sensors can combine inputs to define a new event. For example, a sensor can interpret a combination of light and temperature readings as a potential fire emergency. To detect an explosion, a sensor can use a combination of temperature and sound readings. Or, for home security, it can use an unusual sound or light reading. To conserve static sensors’ energy, event reporting is reactive.
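As an illustration, the predicates below combine readings in the ways just described; the threshold values are assumptions chosen for the sketch, not the system's actual calibration.

```python
# Illustrative event predicates over combined sensor readings.
# All thresholds are assumed values, not the prototype's calibration.

def fire_suspected(light, temperature):
    """High light plus high temperature suggests a potential fire."""
    return light > 900 and temperature > 60.0

def explosion_suspected(temperature, sound):
    """Simultaneous temperature and sound spikes suggest an explosion."""
    return temperature > 80.0 and sound > 110.0

def intrusion_suspected(light, sound):
    """For home security: any unusual light or sound reading."""
    return light > 500 or sound > 70.0

print(fire_suspected(light=950, temperature=75.0))  # -> True
```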

Mobile sensors can move to event locations, exchange messages with other sensors, take snapshots of event scenes, and transmit images to the server. As Figure 2 shows, each mobile sensor is equipped with a Stargate processing board, which is connected to the following:

A Lego car, to support mobility;
A mote, to communicate with the static sensors;
A web cam, to take snapshots; and
An IEEE 802.11 WLAN card, to support high-speed, long-distance communications, such as transmitting images.
The Stargate controls the movement of the Lego car and the web cam.

The external server provides an interface through which users can obtain the system status and issue commands. It also maintains the network and interprets the meanings of events from sensors. On detecting a potential emergency, the server dispatches mobile sensors to visit emergency sites to obtain high-resolution images of the scene. The dispatch algorithm also runs on the server.

System operations and control flows
On receiving the server’s command, the static sensors form a treelike network to collect sensing data. Suppose static sensors A and C report unusually high temperatures, which the server suspects to indicate a fire emergency in the sensors’ neighborhoods.

The server notifies the users and dispatches mobile sensors to visit the sites. On visiting A and C, the mobile sensors take snapshots and perform in-depth analyses. For example, the reported images might indicate the fire's source, identify inflammable material in the vicinity, or locate people left in the building.

Algorithm of static sensor
Each static sensor runs the algorithm in Figure 3. The server periodically floods a tree-maintenance message to maintain the WSN. It also records each static sensor’s location and state, which is initially set to normal. Tree maintenance messages help the static sensors track their parent nodes. To distinguish new from old messages, tree-maintenance messages are associated with unique sequence numbers. The goal is to form a spanning tree in the WSN.
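A sketch of this per-node behavior appears below; the class and field names are hypothetical, since the actual mote code is not shown here.

```python
# Hypothetical sketch of a static sensor handling tree-maintenance floods.

class StaticSensor:
    def __init__(self, node_id):
        self.node_id = node_id
        self.parent = None    # next hop toward the server
        self.last_seq = -1    # highest sequence number seen so far

    def on_tree_maintenance(self, seq, sender_id, rebroadcast):
        """Adopt the first sender of a *new* message as parent, then flood on."""
        if seq <= self.last_seq:
            return            # old or duplicate message: ignore it
        self.last_seq = seq
        self.parent = sender_id          # reports will travel via this parent
        rebroadcast(seq, self.node_id)   # propagate, extending the spanning tree
```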

When a sensor receives an input above a threshold, indicating an event, the sensor reports that event to the server. To avoid sending duplicate messages, each sensor keeps an event-flag variable to indicate whether it has already reported that event. When a sensor detects an event and the event flag is false, the sensor reports that event and sets the flag to true. The server collects multiple events and assigns them to mobile sensors in batches. When a mobile sensor visits an event site, it asks the local sensor to clear its event flag.
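The duplicate-suppression logic could look like the following sketch, again with hypothetical names:

```python
# Hypothetical sketch of event reporting with an event flag.

class EventReporter:
    def __init__(self, send_to_parent):
        self.event_flag = False            # True once the event is reported
        self.send_to_parent = send_to_parent

    def on_reading(self, value, threshold):
        if value > threshold and not self.event_flag:
            self.send_to_parent(("EVENT", value))  # report the event once
            self.event_flag = True                 # suppress duplicates

    def on_mobile_sensor_visit(self):
        self.event_flag = False            # visiting mobile sensor clears the flag
```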

Mobile sensor dispatch and traversal problems
Because mobile sensors are battery powered, we assign them to emergency sites to conserve their energy as much as possible. Specifically, we consider a set L of m emergency sites to be visited by a set S of n mobile sensors, where each site must be visited by one mobile sensor. We allow an arbitrary relationship between m and n. The goal is to maximize the mobile sensors’ total remaining energy after sites are visited.

Our dispatch solution depends on the relationship between m and n. When m ≤ n, we can convert the problem to one of finding a maximum matching in a weighted bipartite graph G = (S ∪ L, S × L), where the vertex set is S ∪ L and the edge set is the product S × L = {(s_i, l_j) | s_i ∈ S, l_j ∈ L}. We set the weight of (s_i, l_j) to e_i − e_move × d(s_i, l_j), where e_i is the current energy of s_i, e_move is the energy cost for a mobile sensor to move one unit of distance, and d(s_i, l_j) is the distance from s_i's current location to l_j. The solution is the maximum matching P of G, which we can find through traditional maximum-weight matching algorithms. Alternatively, we can set our objective to minimizing the mobile sensors' total moving distance; maximum matching also achieves this if we set the weight of (s_i, l_j) to −e_move × d(s_i, l_j).
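As one concrete realization, the sketch below computes such a matching with SciPy's assignment solver, using the weight e_i − e_move × d(s_i, l_j) defined above. The coordinates, energy levels, and e_move value are invented for illustration.

```python
# Dispatch for m <= n as maximum-weight bipartite matching (illustrative data).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

sensors = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])  # n = 3 mobile sensors
sites = np.array([[1.0, 4.0], [8.0, 2.0]])                # m = 2 emergency sites
energy = np.array([100.0, 80.0, 60.0])                    # e_i for each sensor
E_MOVE = 2.0                                              # energy per unit moved

# weight(i, j) = e_i - e_move * d(s_i, l_j), as defined above
weights = energy[:, None] - E_MOVE * cdist(sensors, sites)

rows, cols = linear_sum_assignment(weights, maximize=True)
for i, j in zip(rows, cols):
    print(f"sensor {i} -> site {j}, remaining energy {weights[i, j]:.1f}")
```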

When m > n, some mobile sensors must visit multiple sites. To solve this problem, we divide the emergency sites into n clusters (for example, with the classical K-means method) and assign each cluster to one mobile sensor. In this case, each mobile sensor's cost includes moving to the closest site in its cluster and then traversing the remaining sites one by one. Given a set of locations to be visited, we can use a heuristic for the traveling salesman problem to determine the traversal order.
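The sketch below illustrates the two-stage approach with scikit-learn's K-means and a greedy nearest-neighbor tour. The coordinates are invented, and pairing cluster k with mobile sensor k is a simplification; the cluster-to-sensor assignment could itself reuse the matching above.

```python
# Dispatch for m > n: K-means clustering plus a nearest-neighbor tour (sketch).
import numpy as np
from sklearn.cluster import KMeans

def nearest_neighbor_tour(start, points):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited site."""
    tour, current = [], np.asarray(start, dtype=float)
    remaining = [np.asarray(p, dtype=float) for p in points]
    while remaining:
        nxt = min(remaining, key=lambda p: np.linalg.norm(p - current))
        tour.append(tuple(nxt.tolist()))
        remaining = [p for p in remaining if not np.array_equal(p, nxt)]
        current = nxt
    return tour

sites = np.array([[1, 1], [2, 1], [8, 8], [9, 7], [8, 9]])  # m = 5 sites
mobiles = [np.array([0, 0]), np.array([10, 10])]            # n = 2 sensors

labels = KMeans(n_clusters=len(mobiles), n_init=10, random_state=0).fit_predict(sites)
for k, start in enumerate(mobiles):
    print(f"sensor {k} visits: {nearest_neighbor_tour(start, sites[labels == k])}")
```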

IMPLEMENTATION
Our static sensors are MICAz motes. A MICAz is a 2.4-GHz, IEEE 802.15.4-compliant module allowing low-power operation and offering a 250-Kbps data rate with a direct-sequence spread-spectrum (DSSS) radio.

The Stargate processing platform consists of a 32-bit, 400-MHz Intel PXA-255 XScale reduced-instruction-set computer (RISC) with a 64-Mbyte main memory and 32-Mbyte extended flash memory. It also has a daughterboard with an RS-232 serial port, a PCMCIA slot, a USB port, and a 51-pin extension connector, which can be attached to a mote. It drives the webcam through a USB port and the IEEE 802.11 WLAN card through its PCMCIA slot. The Stargate controls the Lego car via a USB port connected to a Lego infrared tower, as Figure 2 shows. An infrared receiver on the front of the Lego car receives commands from the tower, and two motors on the bottom drive the wheels.

Navigating a mobile sensor or robot is difficult without some auxiliary devices. David Johnson and colleagues used wallboard cameras to capture mobile sensors’ locations, while Jang-Ping Sheu and his colleagues suggested using signal strength to do so.

Our current prototype uses the light sensors on the Lego car to navigate the mobile sensors. We stick tape of different colors on the ground, which lets us easily navigate the Lego car on a board. In our prototype, we implemented an experimental 6 × 6 grid-like sensing field, as Figure 4 shows. Black tape represents roads, and golden tape represents intersections. We constructed the system by placing two mobile sensors and 17 static sensors on the sensing field. For static sensors, a light reading below a threshold of 800 simulates an event, so we cover a static sensor with a box to model a potential emergency.


We use a grid-like sensing field and a grid-like static sensor deployment only for ease of implementation. In general, the static WSN’s topology can be irregular. 

Three factors affect the mobile sensors’ dispatch time:

The time that a mobile sensor takes to cross one grid unit (about 26 centimeters),
The time that a mobile sensor takes to make a 90-degree turn, and
The time that a mobile sensor needs to take snapshots and report the results.

In our current prototype, the times are 2.5, 2.2, and 4.0 seconds, respectively.
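Given these figures, a trip's total dispatch time is a simple linear combination, as the sketch below shows for a hypothetical six-unit, two-turn trip to one site.

```python
# Estimating dispatch time from the three measured factors above.
CROSS_UNIT, TURN_90, SNAPSHOT = 2.5, 2.2, 4.0  # seconds, from the prototype

def dispatch_time(units, turns, sites):
    """Time to cross `units` grid units, make `turns` 90-degree turns,
    and photograph `sites` event sites."""
    return units * CROSS_UNIT + turns * TURN_90 + sites * SNAPSHOT

print(dispatch_time(units=6, turns=2, sites=1))  # -> 23.4 seconds
```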

USER INTERFACE AT EXTERNAL SERVER
At the external server, users monitor the system’s status and control mobile sensors through a user interface, as Figure 6 shows. 

The user interface includes six major components:

The configure area lets users input system configuration information, such as mobile sensors’ IP addresses, ports, and sensors’ positions. 

The system-command area provides an interface to let users control the overall system, such as issuing a tree-maintenance message, adjusting the WSN’s topology, and connecting and disconnecting a specified mobile sensor.

The sensor-status area shows the current status of a static sensor being queried.

The action-control area lets users control the mobile sensors’ actions, including movement and taking snapshots.

The monitor area shows the WSN’s network topology and the mobile sensors’ patrolling paths. When a sensor detects an event, a fire icon appears in the corresponding site.

The log area displays some of the system’s status messages.

CONCLUSION
The proposed iMouse system integrates WSN technologies into surveillance technologies to support intelligent mobile surveillance services. On one hand, the mobile sensors help address a weakness of traditional WSNs, which provide only rough environmental information about the sensing field; by including mobile cameras, we can obtain much richer context information and conduct more in-depth analyses. On the other hand, surveillance can be done in an event-driven manner, which greatly improves on traditional surveillance systems because only critical context information is retrieved and proactively sent to users.
The prototype iMouse system can be improved and extended in several ways. First, the navigation of mobile sensors can be further improved; for example, localization schemes could guide mobile sensors instead of colored tape. Second, coordination among mobile sensors, especially while they are en route, could be exploited. Third, how to use mobile sensors to improve the network topology deserves further investigation.
