HUMANOID ROBOTS FOR SURVEILLANCE



Today's world is one of new scientific invention, and this paper explores the use of humanoid robots in the field of security. Robots are already used in a wide range of settings, including restaurants, espionage, and medicine; our main objective is to bring the humanoid robot into the field of surveillance. This work is of immense use to army personnel carrying out surveillance in the remote areas of India. Why use humanoid robots for surveillance? Consider the Mumbai terror attack: the terrorists held out for 72 hours, and the N.S.G. did not know how many of them were inside the Hotel Taj. Robots designed to counter such terror activities can answer this need. This paper presents several modifications to the robot: a metal detector, infrared cameras for sensing in dim or no light, and a temperature detector for detecting humans inside a building. A 10-megapixel camera with 360-degree rotation provides a clear view of the building's interior, helping army personnel to counter-strike against the terrorists with fewer casualties among soldiers and civilians alike. The motivation for presenting this paper is the great scope for enhancement in the field of robotics for security within the country; such a robot could also support counter-insurgency operations against terrorism.

Human-robot interaction (HR) has recently received considerable attention in the academic community, in labs, in technology companies, and through the media. Because of this attention, it is desirable to present a survey of HR to serve as a tutorial for people outside the field and to promote discussion of a unified vision of HR within the field. The goal of this review is to present a unified treatment of HR-related problems, to identify key themes, and to discuss challenge problems that are likely to shape the field in the near future.
Although the review follows a survey structure, the goal of presenting a coherent "story" of HR means that there are necessarily some well-written, intriguing, and influential papers that are not referenced. Instead of trying to survey every paper, we describe the HR story from multiple perspectives with an eye toward identifying themes that cross applications. The survey attempts to include papers that represent a fair cross section of the universities, government efforts, industry labs, and countries that contribute to HR, and a cross section of the disciplines that contribute to the field, such as human factors, robotics, cognitive psychology, and design. In the field of defense, HR is widely used for search and surveillance because it decreases human casualties and other hindrances for army personnel.

Before Da Vinci there was al-Jazari, the engineering genius of the Islamic world in the Middle Ages. He designed and built a number of automata, including the first programmable humanoid robot, and he also invented the crankshaft. A team from the US History Channel, filming a piece on a 13th-century programmable robot, was on campus last month in the Faculty of Engineering to talk about some very old robots. They were there to film a replica of the mechanism of al-Jazari's drinking boat, a boat full of musical automata first constructed in 1206. Professor Noel Sharkey from Computer Science built the core of the device ("bodged it together from a pile of rubbish", he says) to demonstrate how it could have been programmed. The previous claim for the world's oldest programmable automaton was for a machine built by Leonardo da Vinci in 1478.

Al-Jazari’s machine was originally a boat with four automatic musicians that floated on a lake to entertain guests at royal drinking parties. It had two drummers, a harpist and a flautist. Professor Sharkey’s machine has just the one drummer with a drum, cymbals, bells and no body. The flautist is replaced with an Irish penny whistle. He says he wouldn’t risk taking this to any drinking parties round here.
The heart of the mechanism is a rotating cylindrical beam with pegs (cams) protruding from it. These just bump into little levers that operate the percussion. The point of the model is to demonstrate that the drummer can be made to play different rhythms and different drum patterns if the pegs are moved around. In other words it is a programmable drum machine.

“Whether or not al-Jazari dynamically programmed his machines is an intriguing question”, he says, “it is quite likely that he used this method, at the very least, for fine tuning the rhythm of the musicians”. Professor Sharkey is currently looking at a much older mobile automaton device by Heron of Alexandria, 1st Century AD, which he now suspects may also have been programmable.
Today, robot development has advanced to a great extent. The word "robot" roughly means servant; it is derived from the Czech word "robota", meaning forced labour.

A humanoid robot is a robot whose overall appearance is based on that of the human body, allowing interaction with made-for-human tools or environments. In general, humanoid robots have a torso with a head, two arms and two legs, although some forms of humanoid robots may model only part of the body, for example, from the waist up. Some humanoid robots may also have a 'face', with 'eyes' and 'mouth'. Androids are humanoid robots built to aesthetically resemble a human. A humanoid robot is an autonomous robot because it can adapt to changes in its environment or itself and continue to reach its goal. This is the main difference between humanoid and other kinds of robots. In this context, some of the capacities of a humanoid robot may include, among others:

  • self-maintenance (like recharging itself)
  • autonomous learning (learn or gain new capabilities without outside assistance, adjust strategies based on the surroundings and adapt to new situations)
  • avoiding harmful situations to people, property, and itself
  • safe interacting with human beings and the environment
Like other mechanical robots, humanoids rely on the same basic components: sensing, actuating, and planning and control. Since they try to simulate the human structure and behaviour and are autonomous systems, humanoid robots are usually more complex than other kinds of robots. This complexity affects all robotic scales (mechanical, spatial, time, power density, system, and computational complexity), but it is most noticeable in power density and system complexity. In the first place, most current humanoids are not strong enough even to jump, because their power-to-weight ratio is not as good as the human body's. The dynamically balancing Dexter can jump, but only poorly so far. On the other hand, although there are very good algorithms for the several areas of humanoid construction, it is very difficult to merge all of them into one efficient system (the system complexity is very high). These are the main difficulties that humanoid robot development currently has to deal with.

Humanoid robots are created to imitate some of the same physical and mental tasks that humans perform daily. Scientists and specialists from many different fields, including engineering, cognitive science, and linguistics, combine their efforts to create a robot that is as human-like as possible. Their creators' goal is for the robot one day to understand human intelligence and to reason and act like humans. If humanoids achieve this, they could eventually work in cohesion with humans to create a more productive and higher-quality future. Another important benefit of developing androids is understanding the human body's biological and mental processes, from the seemingly simple act of walking to the concepts of consciousness and spirituality. Right now such robots are used for tasks like welding; in the future they could greatly assist humans in hazardous work such as welding and coal mining. There are currently two ways to model a humanoid robot.
The first models the robot as a set of rigid links connected by joints, a structure similar to the one found in industrial robots. Although this approach is used for most humanoid robots, a new one is emerging in research that draws on knowledge acquired in biomechanics; in this approach, the humanoid robot's structure resembles the human skeleton.
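For the rigid-link model, the pose of an end effector (a hand, say) follows from the joint angles via forward kinematics. A minimal sketch for a planar two-link arm is shown below; the link lengths are illustrative values, not taken from any particular robot.

```python
import math

def forward_kinematics(theta1, theta2, l1=0.3, l2=0.25):
    """End-effector (x, y) position of a planar two-link arm.

    theta1, theta2: joint angles in radians.
    l1, l2: link lengths in metres (hypothetical values for illustration).
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both joints at 0 rad: the arm is fully extended along the x-axis,
# so the hand sits at (l1 + l2, 0).
print(forward_kinematics(0.0, 0.0))  # approximately (0.55, 0.0)
```

A full humanoid simply chains many more such links and joints, in three dimensions, but the same idea applies.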
Enon was created to be a personal assistant. It is self-guiding and has limited speech recognition and synthesis. It can also carry things.
Humanoid robots are used as a research tool in several scientific areas.
Researchers need to understand the human body structure and behavior (biomechanics) to build and study humanoid robots. On the other side, the attempt to simulate the human body leads to a better understanding of it.

Human cognition is a field of study which is focused on how humans learn from sensory information in order to acquire perceptual and motor skills. This knowledge is used to develop computational models of human behavior and it has been improving over time.
It has been suggested that very advanced robotics will facilitate the enhancement of ordinary humans.
Although the initial aim of humanoid research was to build better orthoses and prostheses for human beings, knowledge has been transferred between both disciplines. A few examples are: powered leg prostheses for the neuromuscularly impaired, ankle-foot orthoses, biologically realistic leg prostheses, and forearm prostheses.
Besides research, humanoid robots are being developed to perform human tasks like personal assistance, where they should be able to assist the sick and elderly, and dirty or dangerous jobs. Regular jobs like being a receptionist or a worker on an automotive manufacturing line are also suitable for humanoids. In essence, since they can use tools and operate equipment and vehicles designed for the human form, humanoids could theoretically perform any task a human being can, so long as they have the proper software. However, the complexity of doing so is deceptively great.

They are becoming increasingly popular for providing entertainment too. For example, Ursula, a female robot, sings, dances, and speaks to her audiences at Universal Studios. Several Disney attractions employ animatrons, robots that look, move, and speak much like human beings, in some of their theme park shows. These animatrons look so realistic that it can be hard to tell from a distance whether or not they are actually human. Although they have a realistic look, they have no cognition or physical autonomy.
Humanoid robots, especially those with artificial intelligence algorithms, could be useful for future dangerous and/or distant space exploration missions, without the need to return to Earth once the mission is completed.



SENSORS

A sensor is a device that measures some attribute of the world. As one of the three primitives of robotics (the other two being planning and control), sensing plays an important role in robotic paradigms.
Sensors can be classified according to the physical process with which they work or according to the type of measurement information that they give as output. In this case, the second approach was used.


Proprioceptive Sensors


Proprioceptive sensors sense the position, the orientation and the speed of the humanoid's body and joints. In human beings inner ears are used to maintain balance and orientation. Humanoid robots use accelerometers to measure the acceleration, from which velocity can be calculated by integration; tilt sensors to measure inclination; force sensors placed in robot's hands and feet to measure contact force with environment; position sensors, that indicate the actual position of the robot (from which the velocity can be calculated by derivation) or even speed sensors.
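The integration step mentioned above, recovering velocity from accelerometer samples, can be sketched as follows. This is a generic trapezoidal-rule estimate under the assumption of evenly spaced samples, not any specific robot's implementation; in practice sensor bias makes raw integration drift, so real systems fuse it with other measurements.

```python
def integrate_velocity(accels, dt, v0=0.0):
    """Estimate velocity from accelerometer samples by trapezoidal integration.

    accels: acceleration samples in m/s^2, taken every dt seconds.
    v0: initial velocity in m/s.
    Returns one velocity estimate per sample.
    """
    v = v0
    velocities = [v]
    for a_prev, a_next in zip(accels, accels[1:]):
        # Area under the acceleration curve over one sample interval.
        v += 0.5 * (a_prev + a_next) * dt
        velocities.append(v)
    return velocities

# Constant 1 m/s^2 for 1 s, sampled at 10 Hz: final velocity is 1 m/s.
vs = integrate_velocity([1.0] * 11, dt=0.1)
print(vs[-1])  # 1.0
```

The inverse relationship, position sensors differentiated to obtain velocity, works the same way in reverse.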


Exteroceptive Sensors


Exteroceptive sensors give the robot information about the surrounding environment allowing the robot to interact with the world. The exteroceptive sensors are classified according to their functionality.
Proximity sensors are used to measure the relative distance (range) between the sensor and objects in the environment. They perform the same task that vision and tactile senses do in human beings. There are other kinds of proximity measurements, like laser ranging, the usage of stereo cameras, or the projection of a colored line, grid or pattern of dots to observe how the pattern is distorted by the environment. To sense proximity, humanoid robots can use sonars and infrared sensors, or tactile sensors like bump sensors, whiskers (or feelers), capacitive and piezoresistive sensors.
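A sonar range reading of the kind described above comes from a time-of-flight measurement: the sensor emits a pulse and times the echo, and since the pulse travels to the obstacle and back, the range is half the round-trip distance. A minimal sketch, assuming sound speed in air at roughly room temperature:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def sonar_range(echo_time_s):
    """One-way distance to an obstacle from a sonar echo's round-trip time.

    The pulse travels out and back, hence the division by two.
    """
    return SPEED_OF_SOUND * echo_time_s / 2.0

# An echo arriving 10 ms after the pulse implies an obstacle 1.715 m away.
print(sonar_range(0.01))  # 1.715
```

Infrared rangers work on the same principle with light instead of sound, only with far shorter flight times.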

Arrays of tactels can be used to provide data on what has been touched; the Shadow Hand uses an array of 34 tactels arranged beneath its polyurethane skin on each fingertip. Tactile sensors also provide information about forces and torques transferred between the robot and other objects.

Vision refers to processing data from any modality which uses the electromagnetic spectrum to produce an image. In humanoid robots it is used to recognize objects and determine their properties. Vision sensors work most similarly to human eyes, and most humanoid robots use CCD cameras as vision sensors.

Sound sensors allow humanoid robots to hear speech and environmental sounds, serving as the robot's ears; microphones are usually used for this task.


ACTUATORS

Actuators are the motors responsible for motion in the robot. Humanoid robots are constructed in such a way that they mimic the human body, so they use actuators that perform like muscles and joints, though with a different structure. To achieve the same effect as human motion, humanoid robots use mainly rotary actuators. They can be electric, pneumatic, hydraulic, piezoelectric or ultrasonic.
Hydraulic and electric actuators have a very rigid behavior and can only be made to act in a compliant manner through the use of relatively complex feedback control strategies. While electric coreless motor actuators are better suited for high speed and low load applications, hydraulic ones operate well at low speed and high load applications.
Piezoelectric actuators generate a small movement with a high force capability when voltage is applied. They can be used for ultra-precise positioning and for generating and handling high forces or pressures in static or dynamic situations.
Ultrasonic actuators are designed to produce movements in a micrometer order at ultrasonic frequencies (over 20 kHz). They are useful for controlling vibration, positioning applications and quick switching.
Pneumatic actuators operate on the basis of gas compressibility. As they are inflated they expand along their axis, and as they deflate they contract. If one end is fixed, the other will move along a linear trajectory. These actuators are intended for low-speed and low/medium-load applications. Pneumatic actuators include cylinders, bellows, pneumatic engines, pneumatic stepper motors, and pneumatic artificial muscles.

PLANNING AND CONTROL

In planning and control, the essential difference between humanoids and other kinds of robots (like industrial ones) is that the movement of the robot has to be human-like, using legged locomotion, especially biped gait. The ideal planning for humanoid movements during normal walking should result in minimum energy consumption, as happens in the human body. For this reason, studies on the dynamics and control of these kinds of structures have become more and more important.

To maintain dynamic balance during the walk, a robot needs information about contact force and its current and desired motion. The solution to this problem relies on a major concept, the Zero Moment Point (ZMP).
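The ZMP is the point on the ground where the net moment of inertial and gravity forces has no horizontal component; the robot stays dynamically balanced while the ZMP remains inside the support polygon of the feet. A minimal sketch of the standard point-mass ZMP formula follows; the masses, positions, and accelerations are made-up illustrative values, not those of any real robot.

```python
G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(masses, xs, zs, ax, az):
    """x-coordinate of the Zero Moment Point for a set of point masses.

    masses: m_i in kg; xs, zs: positions in m; ax, az: accelerations in m/s^2.
    Implements x_zmp = sum(m_i * ((az_i + g) * x_i - ax_i * z_i))
                       / sum(m_i * (az_i + g)).
    """
    num = sum(m * ((a_z + G) * x - a_x * z)
              for m, x, z, a_x, a_z in zip(masses, xs, zs, ax, az))
    den = sum(m * (a_z + G) for m, a_z in zip(masses, az))
    return num / den

# A single stationary mass: the ZMP sits directly beneath it.
print(zmp_x([50.0], [0.1], [0.8], [0.0], [0.0]))  # 0.1
```

A walking controller adjusts the planned motion so that this point never leaves the foot's contact area.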
Another characteristic of humanoid robots is that they move, gather information about the "real world" using sensors, and interact with it; they do not stay still like factory manipulators and other robots that work in highly structured environments. Planning and control must therefore address self-collision detection, path planning, and obstacle avoidance to allow humanoids to move in complex environments.
There are features of the human body that cannot yet be found in humanoids. These include structures with variable flexibility, which provide safety (to the robot itself and to people), and redundancy of movement, i.e., more degrees of freedom and therefore wider task availability. Although these characteristics are desirable in humanoid robots, they will bring more complexity and new problems to planning and control.


LEARNING AND UNDERSTANDING BIMANUAL MOVEMENTS


Scientists at Dublin City University have researched a subset of human movements called bimanual movements. At different stages of this research they have approached the problems from novel points of view. They believe that many machine learning problems can incorporate neuroscientific and perceptual aspects of human movement for learning and recognizing human behaviours.
Learning and recognizing human movements have received great attention from researchers around the world in recent years. A broad range of applications, from medicine to surveillance and security, can benefit from this technology; learning hand movements and recognizing gestures are significant components of it.

Bimanual movements in general form a large subset of hand movements in which both hands move simultaneously in order to do a task or imply a meaning. Clapping, opening a bottle, typing on a keyboard and drumming are some usual bimanual movements. Sign Languages also use bimanual movements to accommodate sets of gestures for communication.
Due to the involvement of both hands, understanding bimanual movements requires not only computer vision and pattern recognition techniques but also neuroscientific studies as a background to perceive the movements.

A cognitive system for bimanual movements learning and understanding entails three fundamental components (see Figure 1): low-level image processing to deal with sensory data, intelligent hand tracking to recognize the left hand from the right hand, and machine learning for understanding the movements.
Using a monochrome surveillance CCD camera the hands are extracted based on the hand grey-levels within the high contrast images. The second component is hand tracking, which is a significant problem due to the presence of hand-hand occlusion. When one hand covers the other partially or completely, they must be re-acquired correctly at the end of the occlusion period.

Studies in neuroscience show that the two hands are temporally and spatially coordinated in bimanual movements. In addition, the components of one hand are temporally coordinated too. These forms of coordination form the basis of our algorithm for tracking the hands in bimanual movements.
We have taken a general view of the tracking problem to cover many challenging problems in this area. For example, from a pure pattern recognition point of view a movement can be understood differently when it is seen from different camera view directions. By defining a general set of movement models independent of view angle we have developed the tracking algorithm so that it covers almost every camera view direction. It is trained in just one direction and can be used in other directions. This makes the algorithm independent of the position of the visual system.
Using the temporal coordination both between limbs (the two hands) and within a limb (a hand and its fingers), the algorithm tracks the hands independently of hand shape, even in movements where the shapes change. This is especially important from a processing-speed point of view: since processing and understanding hand shapes is usually time-consuming, the tracking algorithm, as a component of an integrated real-time recognition system, must be fast enough to leave room for the other components.

The view-direction and hand-shape independence naturally lends itself to extending the concept of tracking towards mobile vision environments (e.g. active vision in robotics). We have developed a model to make the algorithm independent of the camera's actual position and velocity. Consequently, it can be used in applications where the visual system (the camera) moves or turns. For example, assuming that the camera is installed on a humanoid robot, the algorithm tracks the hands of a subject while the robot walks.
The third component of the system is the recognizer. As a hierarchical cognitive system, it analyses the hand shapes at the bottom level, learns the individual partial movement of each hand at the intermediate level, and combines them at the top level to recognize the whole movement (see Figure 2). Statistical and spatio-temporal pattern recognition methods such as Principal Component Analysis and Hidden Markov Models form the bottom and intermediate levels of the system. A Bayesian inference network at the top level perceives the movements as a combination of a set of recognized partial hands movements.
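The bottom level's use of Principal Component Analysis amounts to projecting high-dimensional hand-shape vectors onto a few directions of maximal variance before the temporal models see them. The sketch below is a minimal PCA via the SVD of mean-centred data, on randomly generated toy "hand shapes"; it is not the Dublin group's actual implementation.

```python
import numpy as np

def pca_project(samples, n_components=2):
    """Project samples onto their first n_components principal components.

    samples: (n_samples, n_features) array, e.g. flattened hand silhouettes.
    """
    X = np.asarray(samples, dtype=float)
    X_centred = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by explained variance.
    _, _, Vt = np.linalg.svd(X_centred, full_matrices=False)
    return X_centred @ Vt[:n_components].T

rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 64))  # 20 toy shape vectors, 64 features each
reduced = pca_project(shapes, n_components=2)
print(reduced.shape)  # (20, 2)
```

The resulting low-dimensional sequences are what a Hidden Markov Model can then learn as the partial movement of each hand.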
The recognizer has been developed so that it learns single movements and recognizes both single and concatenated periodic bimanual movements. The concatenated periodic bimanual movements are used particularly in Virtual Reality simulators for interacting with virtual environments. A virtual spacecraft controlled by bimanual gestures is an example. In all parts of this research we have looked at the problems from the general point of view and developed general solutions. The tracking algorithm can be employed in a wide range of applications including recognition, Virtual Reality, and surveillance/security systems. The recognizer can be used in recognizing both single and concatenated periodic bimanual movements.
Our plan for the future is to make the recognition component independent of the camera view direction. This will result in a system that can recognize movements from view directions for which it has not been trained. Results of ongoing research in this area will open significant doors towards the general learning and understanding of human movements.


COMPONENTS ASSEMBLED IN THE ROBOT

1) WEB CAM: This web cam is mounted on the top of the robot and is free to rotate through 360 degrees in any direction. Webcams typically include a lens, an image sensor, and some support electronics.
Various lenses are available, the most common in consumer-grade webcams being a plastic lens that can be screwed in and out to set the camera's focus. Fixed focus lenses, which have no provision for adjustment, are also available. As a camera system's depth of field is greater for small imager formats and is greater for lenses with a large f-number (small aperture), the systems used in webcams have sufficiently large depth of field that the use of a fixed focus lens does not impact image sharpness much. Image sensors can be CMOS or CCD, the former being dominant for low-cost cameras, but CCD cameras do not necessarily outperform CMOS-based cameras in the low cost price range. Most consumer webcams are capable of providing VGA-resolution video at a frame rate of 30 frames per second. Many newer devices can produce video in multi-megapixel resolutions, and a few can run at high frame rates such as the PlayStation Eye, which can produce 320×240 video at 120 frames per second.[4]
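The resolutions and frame rates quoted above imply substantial raw data rates, which matter when the robot streams video back to operators. A quick back-of-the-envelope calculation, assuming uncompressed 24-bit colour (real links would use compression):

```python
def raw_video_rate(width, height, fps, bytes_per_pixel=3):
    """Uncompressed video data rate in megabytes per second.

    bytes_per_pixel=3 assumes 24-bit colour.
    """
    return width * height * bytes_per_pixel * fps / 1e6

# VGA at 30 fps and the PlayStation Eye's 320x240 at 120 fps happen to
# produce the same raw rate, since 4x fewer pixels runs at 4x the rate.
print(raw_video_rate(640, 480, 30))   # 27.648 MB/s
print(raw_video_rate(320, 240, 120))  # 27.648 MB/s
```

This is why a surveillance robot's radio link, rather than its camera, often limits usable video quality.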
2) METAL DETECTOR: A metal detector is mounted on the lower part of the robot so that it can detect guns and other concealed metal objects used by terrorists during hostage situations and similar incidents.

CONCLUSION

Today, safety is paramount: every country wants to be protected against every mode of terror attack, whether from inside or outside its borders. The aim of presenting this paper is to show how well humanoid robots can carry out such tasks without harm to humans: they decrease human effort and perform the job in the best possible manner. Consider the terror attack that occurred in Mumbai: the N.S.G. commandos suffered a great setback in their operation because the entire operation was performed manually by the commandos.

[1] Proceedings of the International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design.
[2] Proceedings of the IEEE Conference on Intelligent Transportation Systems.
[3] European Annual Conference on Human Decision Making and Control.
[4] J. A. Adams, "Human-robot interaction design: Understanding user needs and requirements," in Human Factors and Ergonomics Society 49th Annual Meeting, 2005.
[5] J. A. Adams and M. Skubic, "Special issue on human-robot interaction," IEEE Transactions on Systems, Man, and Cybernetics: Part A -- Systems and Humans, vol. 35, no. 4, 2005.
[6] AICHI, Robot Project: We Live in the Robot Age at EXPO 2005. Available from: http://www.expo2005.or.jp/en/robot/robot_project_00.html, 2005.
[7] J. S. Albus, "Outline for a theory of intelligence," IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, no. 3, pp. 473-509, 1991.
[8] S. All and I. Nourbakhsh, "Insect telepresence: Using robotic tele-embodiment to bring insects face-to-face with humans," Autonomous Robots, Special Issue on Personal Robotics, vol. 10, pp. 149-161, 2001.
[9] R. O. Ambrose, H. Aldridge, R. S. Askew, R. R. Burridge, W. Bluethmann, M. Diftler, C. Lovchik, D. Magruder, and F. Rehnmark, "Robonaut: NASA's space humanoid," IEEE Intelligent Systems and Their Applications, vol. 15, no. 4, pp. 57-63, 2000.
