Advanced Learning Methodologies - Seminar Report


INTRODUCTION
The developments made in the field of Computer Science have brought new light into the life of man. Just as mechanical machines like the spinning jenny, the steam engine, the crane, and the motor car helped him with physical tasks he could not perform himself (a crane can lift a 1000 kg load, whereas even ten people would struggle to do so), the computer helped man with mental tasks involving tedious calculations. Fantasies about structures that could aid man in aspects of thinking, such as decision making and unguided implementation, led to the development of Artificial Intelligence. Artificial Neural Networks comprise one of the key branches of Artificial Intelligence.

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well. Adjusting the synaptic weights of an ANN is something like changing a potentiometer setting until the desired output is reached.
Many important advances have been boosted by the use of inexpensive computer emulations. Neural network simulations appear to be a recent development. However, this field was established before the advent of computers, and has survived at least one major setback and several eras. The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts, but the technology available at that time did not allow them to do much with it. Currently, the neural network field enjoys a resurgence of interest and a corresponding increase in funding. Neural networks promise applications in fields like pattern recognition, space applications, classification, predictive algorithm generation, etc.

DISTINGUISHING FEATURES:
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations.

Other advantages include:

  • Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience. 
  • Self-Organization: An ANN can create its own organization or representation of the information it receives during learning time. 
  • Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability. 
  • Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage. 

Neural networks do not perform miracles. But if used sensibly they can produce some amazing results.

NEURAL NETWORKS VERSUS CONVENTIONAL COMPUTERS
Neural networks take a different approach to problem solving than conventional computers. Conventional computers use an algorithmic approach: the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve. But computers would be so much more useful if they could do things that we don't exactly know how to do.
Neural networks process information in a way similar to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.
On the other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small, unambiguous instructions. These instructions are then converted to a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.
Neural networks and conventional algorithmic computers are not in competition but complement each other. There are tasks that are more suited to an algorithmic approach, like arithmetic operations, and tasks that are more suited to neural networks. Moreover, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.
  
THE NEURON MODEL:
A widely used model of the artificial neuron is the McCulloch and Pitts (MCP) model. In this model, the inputs are 'weighted': the effect that each input has on decision making depends on the weight of that particular input. The weight of an input is a number which, when multiplied with the input, gives the weighted input. These weighted inputs are then added together, and if the sum exceeds a pre-set threshold value, the neuron fires. In any other case the neuron does not fire.
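
As a concrete illustration, the following is a minimal Python sketch of an MCP-style neuron; the function name, the particular weights, and the threshold value are illustrative assumptions rather than part of the original model:

    # A minimal sketch of a McCulloch-Pitts (MCP) neuron.
    def mcp_neuron(inputs, weights, threshold):
        # Multiply each input by its weight and sum the weighted inputs.
        weighted_sum = sum(x * w for x, w in zip(inputs, weights))
        # The neuron fires (outputs 1) only if the sum exceeds the threshold.
        return 1 if weighted_sum > threshold else 0

    # Example: with these (assumed) weights and threshold, the neuron
    # behaves like a logical AND gate.
    print(mcp_neuron([1, 1], [0.6, 0.6], 1.0))  # 1 -- sum 1.2 exceeds 1.0
    print(mcp_neuron([1, 0], [0.6, 0.6], 1.0))  # 0 -- sum 0.6 does not

With suitably chosen weights and threshold, such a neuron can implement simple logic functions, as the example shows.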

LEARNING METHODOLOGIES
Once a network has been structured for a particular application, that network is ready to be trained. To start this process the initial weights are chosen randomly. Then, the training, or learning, begins.

The various methodologies are:

  • Supervised Learning
  • Unsupervised Learning
  • Reinforced Learning
  • Competitive Learning
  • Widrow-Hoff Learning
  • Hebbian Learning

SUPERVISED LEARNING:
During the Training Session of the ANN, an input stimulus is applied that results in an output response. The response is then compared with a TARGET response. If the actual response differs from the Target response, the ANN generates an error signal, which is then used to calculate the adjustment that should be made to the network's synaptic weights so that the actual output matches the Target output. In other words, the error is minimized, possibly to zero. The error minimization process requires a special circuit known as the TEACHER or SUPERVISOR; hence the name SUPERVISED LEARNING.
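
A minimal sketch of this training cycle, assuming a single linear neuron and a simple proportional weight adjustment (the learning rate, tolerance, and iteration bound are assumptions for the example):

    # Illustrative supervised-learning loop: apply the stimulus, compare
    # the response with the TARGET, generate an error signal, and adjust
    # the synaptic weights until the error is minimized.
    def train_supervised(weights, stimulus, target,
                         lr=0.1, tolerance=1e-3, max_epochs=1000):
        for _ in range(max_epochs):
            response = sum(w * x for w, x in zip(weights, stimulus))
            error = target - response        # error signal from the TEACHER
            if abs(error) < tolerance:       # error minimized; training stops
                break
            # Adjust each weight in proportion to the error and its input.
            weights = [w + lr * error * x
                       for w, x in zip(weights, stimulus)]
        return weights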

The notion of a TEACHER comes from Biological observations. For example, when learning a language, we hear the sound of a word from a Teacher. The sound is stored in the Memory Banks of our Brain, and we try to reproduce the sound. When we hear our own sound, we mentally compare it with the stored sound and note the error. If the error is large, we try again and again until it becomes significantly small; then we stop. With ANNs the amount of calculation required to minimize the error depends on the Algorithm used; clearly, this is purely a Mathematical tool derived from Optimization Techniques.

UNSUPERVISED LEARNING:
This does not require a Teacher, i.e. there is no Target Output. During the Training Session, the ANN receives at its inputs many different excitations, or input patterns, and it arbitrarily organizes the patterns into categories. When a Stimulus is later applied, the ANN provides an output response indicating the class to which this Stimulus belongs. If a class cannot be found, a new class is Generated.

Even though it does not require a Teacher, it requires Guidelines to determine how it will form the Groups. Grouping may be based on shape, color, material consistency, or some other property of the object. Similarly, to classify more comprehensive patterns efficiently, the ANN may need some feature-selecting guidelines initially.
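
As a rough sketch of this idea, the following routine assigns each input pattern to the nearest stored prototype and generates a new class when no existing class is close enough; the distance measure and threshold are assumptions standing in for the guidelines mentioned above:

    # Illustrative unsupervised categorization: patterns are grouped by
    # distance to stored prototypes; a new class is generated when no
    # prototype is within the (assumed) threshold.
    def classify(pattern, prototypes, threshold=1.0):
        best, best_dist = None, float("inf")
        for label, proto in enumerate(prototypes):
            dist = sum((p - q) ** 2 for p, q in zip(pattern, proto)) ** 0.5
            if dist < best_dist:
                best, best_dist = label, dist
        if best is None or best_dist > threshold:
            prototypes.append(list(pattern))   # a new class is generated
            return len(prototypes) - 1
        return best

    prototypes = []
    print(classify([0.0, 0.0], prototypes))  # 0 -- first pattern opens class 0
    print(classify([0.1, 0.0], prototypes))  # 0 -- close enough to class 0
    print(classify([5.0, 5.0], prototypes))  # 1 -- too far; new class generated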

REINFORCED LEARNING:
REINFORCED Learning requires one or more neurons at the output layer and a teacher that, unlike in Supervised Learning, does not indicate how close the Actual Output is to the Desired Output, but only whether the Actual Output is the same as the Desired Output or not. During the Learning phase, an Input Stimulus is applied and the Output Response is obtained. The Teacher then presents only a "Pass/Fail" indication; thus the error signal generated during the training session is binary. If the Teacher's indication is "Fail", the ANN readjusts its parameters and tries again and again until it gets its output response right. Hence, there is no indication of whether the ANN is moving in the right direction or not, and certain BOUNDARIES should be established so that the trainee does not keep trying to get the correct response ad infinitum.
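
A sketch of this "Pass/Fail" cycle, with a bounded number of trials standing in for the BOUNDARIES mentioned above (the random readjustment strategy and the teacher function are assumptions for the example):

    import random

    # Illustrative reinforced learning: the teacher only says Pass or Fail,
    # so the network blindly readjusts its parameters until it passes or
    # the trial boundary is reached.
    def reinforced_train(weights, inputs, teacher, max_trials=1000):
        for trial in range(max_trials):
            response = sum(w * x for w, x in zip(weights, inputs))
            if teacher(response):            # binary "Pass" signal; stop
                return weights, trial
            # "Fail": readjust the parameters (here, a random perturbation).
            weights = [w + random.uniform(-0.1, 0.1) for w in weights]
        return weights, max_trials           # boundary reached

    # Example teacher that reports only Pass/Fail (an assumption):
    # teacher = lambda response: abs(response - 1.0) < 0.05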

COMPETITIVE LEARNING:
Competitive Learning is another form of Supervised Learning that is distinctive because of its characteristic operation and architecture. In this scheme, several neurons are at the output layer. When an Input Stimulus is applied, each output neuron competes with the others to produce the output signal closest to the Target. This output then becomes the dominant one, and the other outputs cease producing a signal for that Stimulus. For another Stimulus, another output neuron becomes the dominant one, and so on. Thus each output neuron is trained to respond to a different input Stimulus.
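
A small winner-take-all sketch of this competition (the particular outputs, target, and selection rule are illustrative assumptions):

    # Illustrative competition: the neuron whose output is closest to the
    # Target becomes dominant; all other outputs are suppressed.
    def compete(outputs, target):
        winner = min(range(len(outputs)),
                     key=lambda i: abs(outputs[i] - target))
        return [out if i == winner else 0.0
                for i, out in enumerate(outputs)]

    print(compete([0.2, 0.9, 0.5], target=1.0))   # [0.0, 0.9, 0.0]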

WIDROW-HOFF LEARNING:
This is also known as the Delta Rule. It is based on the idea of continuously adjusting the weights so that the difference (error) between the desired output value and the actual output value of a processing element is reduced.
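
A minimal sketch of one Delta Rule update for a linear processing element (eta, the learning rate, is an assumed constant):

    # Widrow-Hoff (Delta) Rule: each weight is adjusted in proportion to
    # the error between the desired and the actual output.
    def delta_rule_update(weights, inputs, desired, eta=0.05):
        actual = sum(w * x for w, x in zip(weights, inputs))
        delta = desired - actual             # the error to be reduced
        return [w + eta * delta * x for w, x in zip(weights, inputs)]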

HEBBIAN LEARNING:
In 1949, Donald Hebb stated that when an axon of a cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased. Thus the synaptic strength between the cells A and B is modified according to the degree of correlated activity between input and output. This type of learning methodology is called Hebbian Learning.
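
As a sketch, a basic Hebbian update strengthens a synapse in proportion to the correlated activity of the cells on either side of it (eta is an assumed learning-rate constant):

    # Hebbian update: the weight between cells A and B grows with the
    # product of A's (pre-synaptic) and B's (post-synaptic) activity.
    def hebbian_update(weight, pre_activity, post_activity, eta=0.01):
        return weight + eta * pre_activity * post_activity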

We also have the Gradient Descent Rule, in which the values of the weights are adjusted by an amount proportional to the first derivative of the error between the desired and the actual output values of the processing element, with respect to the values of the weights.
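
As a sketch, assuming a linear processing element and a squared-error measure E = 0.5 * (desired - actual) ** 2, the derivative-based update looks like this; note that for such an element it coincides with the Widrow-Hoff update above:

    # Gradient Descent rule: each weight moves opposite the first
    # derivative of the error with respect to that weight.
    # For a linear element, dE/dw_i = -(desired - actual) * x_i.
    def gradient_descent_update(weights, inputs, desired, eta=0.05):
        actual = sum(w * x for w, x in zip(weights, inputs))
        return [w + eta * (desired - actual) * x
                for w, x in zip(weights, inputs)]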

CONCLUSION
The various learning methodologies discussed so far prove to be satisfactory in their respective implementations. We are already successfully using them in various applications like pattern recognition, prediction systems in business, speech and image processing, voice recognition, etc. A lot of research is still under way to develop more efficient methodologies that are faster, require fewer iterations, and are less complex and more economical. Let us hope that they promise a better future.
