ABSTRACT
Recently, several papers have proposed pseudo-dynamic methods for automatic handwritten signature verification. Each of these papers uses texture measures of the gray-level signature strokes. This paper explores the usefulness of local binary pattern (LBP) and local directional pattern (LDP) texture measures for discriminating off-line signatures. A comparison of several texture normalizations is made in an attempt to reduce pen dependence. The experiments conducted with the MCYT off-line and GPDS960Graysignature corpora show that LDPs are more useful than LBPs for automatic verification of static signatures. Additionally, the results show that the LDP codes of the contour are more discriminating than the LDPs of the stroke interior, although their combination at the score level improves the overall performance of the scheme. The results are obtained by modeling the signatures with a Support Vector Machine (SVM) trained with genuine samples and random forgeries; random and simulated forgeries are used for testing.
INTRODUCTION
Offline handwritten text recognition is one of the most active areas of research in computer science, and it is inherently difficult because of the high variability of writing styles. High recognition rates are achieved in character recognition and isolated word recognition, but we are still far from achieving high-performance recognition systems for unconstrained offline handwritten texts. Automatic handwriting recognition systems normally include several preprocessing steps to reduce variation in the handwritten texts as much as possible while preserving the information that is relevant for recognition. There is no general solution to the preprocessing of offline handwritten text lines, but it typically relies on slope and slant correction and on normalization of the size of the characters. With slope correction, the handwritten word is rotated such that the lower baseline is aligned with the horizontal axis of the image. Slant is the clockwise angle between the vertical direction and the direction of the vertical text strokes; slant correction transforms the word into an upright position. Ideally, the removal of slope and slant results in a word image that is independent of these factors. Finally, size normalization tries to make the system invariant to character size and to reduce the empty background areas caused by the ascenders and descenders of some letters. This paper presents new techniques to remove the slope and the slant from handwritten text lines and to normalize the size of the text images by using Artificial Neural Networks (ANNs). Local extrema from a text image, classified as belonging to the lower baseline by a Multilayer Perceptron (MLP), are used to accurately estimate the slope and the horizontal alignment. Slant is removed in a non-uniform way, also using ANNs. Finally, another MLP computes the reference lines of the slope- and slant-corrected text in order to normalize its size.
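As an aside not present in the original text, slant correction is usually realized as a horizontal shear of the image: a pixel at coordinates (x, y), with y measured from the baseline, is mapped to (x - y·tan α, y), where α is the estimated slant angle. The non-uniform correction described here simply applies a locally estimated α instead of a single global angle.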
MODULES
There are four modules in this project:
- Authentication
- Preprocessing
- Feature extraction
- Classification
Authentication
The first module of Off-line Signature Verification using the Enhanced Modified Direction Feature and Neural-based Classification is authentication. Authentication is done to secure the application from unauthorized users. The username and password are checked, and unauthorized users are rejected. A user can access the application only if the username and password are valid. As the first module of the project, it provides security for the application.
Preprocessing
Preprocessing is the stage in which the input image is converted into the portable bitmap (.pbm) format and then sent on for further processing. The purpose of converting it into a bitmap format is that the next module extracts the boundaries of the signature, and boundary extraction is much easier on a bitmap image.
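The following is a minimal sketch of this step in VB.NET, assuming the System.Drawing classes and an illustrative brightness threshold of 0.5; it is not the project's actual code.

Imports System.Drawing
Imports System.IO

Module Preprocessing
    ' Threshold a scanned signature and write it as an ASCII portable
    ' bitmap (PBM, magic number "P1"). In PBM, 1 is black ink, 0 is white.
    Sub ConvertToPbm(ByVal inputPath As String, ByVal outputPath As String)
        Using img As New Bitmap(inputPath)
            Using writer As New StreamWriter(outputPath)
                writer.WriteLine("P1")
                writer.WriteLine("{0} {1}", img.Width, img.Height)
                For y As Integer = 0 To img.Height - 1
                    For x As Integer = 0 To img.Width - 1
                        Dim isInk As Boolean = img.GetPixel(x, y).GetBrightness() < 0.5
                        writer.Write(If(isInk, "1 ", "0 "))
                    Next
                    writer.WriteLine()
                Next
            End Using
        End Using
    End Sub
End Module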
Feature extraction
In feature extraction, the boundary of the signature image is extracted and the Modified Direction Feature (MDF) is computed from it. The purpose of extracting the signature's boundary is that it becomes easier for the classifier to identify and verify the signature, because feature extraction reduces the image to a much smaller representation.
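As an illustration of the boundary step only (the actual MDF computation goes further, recording direction values along these contours), the sketch below marks an ink pixel as boundary when any of its four neighbours is background; the names are assumptions.

Module FeatureExtraction
    ' Extract the signature contour from a binarized image: a pixel is
    ' boundary if it is ink and at least one 4-neighbour is background.
    Function ExtractBoundary(ByVal ink(,) As Boolean) As Boolean(,)
        Dim h As Integer = ink.GetLength(0)
        Dim w As Integer = ink.GetLength(1)
        Dim boundary(h - 1, w - 1) As Boolean
        For y As Integer = 1 To h - 2
            For x As Integer = 1 To w - 2
                If ink(y, x) Then
                    boundary(y, x) = Not (ink(y - 1, x) AndAlso ink(y + 1, x) _
                        AndAlso ink(y, x - 1) AndAlso ink(y, x + 1))
                End If
            Next
        Next
        Return boundary
    End Function
End Module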
Classification
In the classification process the input is an image file, and the classifier identifies and verifies the signature. This is the last module of the project; it uses the trained classifier, which gives an accuracy of about 91.21%, much higher than that of the existing system.
NEOAPP-PROFILE
NeoApp develops custom software solutions for companies in a variety of industries. Since its beginning in August 2008, NeoApp has offered efficient, reliable, and cost-effective solutions of good quality by implementing CMMI practices from its development facility located in Hyderabad.
NeoApp has expertise in the latest technologies and caters to your exacting requirements. NeoApp helps you from concept to completion of a project with a full range of service offerings.
Most importantly, NeoApp combines the right strategy with the right products and the right people, ensuring technical superiority, high-quality deliverables, and timely implementations. NeoApp supports different delivery and billing models to fit your requirements. By having NeoApp involved with your software development projects, you benefit from reduced costs and faster development cycles. To reduce development costs, NeoApp strictly adheres to a reusable component model with a plug-and-play architecture.
The offshore outsourcing model has become easy to adopt, and its benefits now extend beyond cost reduction. Offshore outsourcing with NeoApp includes full-spectrum services and manifold benefits.
NeoApp, with its experience in executing offshore projects ranging from large enterprise solutions to small plug-in applications, helps customers achieve their offshore outsourcing goals.
NeoApp establishes suitable project execution methodologies for each project and accomplishes offshore execution on time and on budget. NeoApp places high importance on the quality of deliverables and has mandatory quality gates in place for each project, ensuring the success of the overall project.
NeoApp works with you from conceptualization to completion and has the required expertise to pick up the project at any stage in its life cycle.
- Business concept and system study
- Requirement study
- Design architecture and develop specifications
- Design the framework of the solution
- Develop the solution
- QA the solution against requirements
- Continuous support for the solution
- Develop prototypes for proof of concept
- Engineer the solution
- Release as per plan
The team and project approach of NeoApp has resulted in project deliveries that exceed expectations. NeoApp works with you in refining the project at every stage and, with its vast and experienced talent pool, brings value and innovation to the project.
NeoApp offers complete solutions for application maintenance requirements, helping organizations cut costs and optimize resource utilization. NeoApp performs the following tasks on a variety of technology platforms, ranging from legacy to client-server to browser-based Internet applications.
- Application Development
- Application Maintenance
- Application Support
NeoApp, with its experience in a wide range of technologies and its ability to learn quickly, helps ensure the availability of your systems to your customers. NeoApp performs systems monitoring and undertakes evolutionary development of these applications as required and deemed fit.
SYSTEM ANALYSIS
EXISTING SYSTEM
The existing system is handwritten character recognition using the Modified Direction Feature (MDF): a system that recognizes handwritten characters. MDF generated encouraging results, reaching an accuracy of 81.58%.
In this system, every handwritten character of each person is scanned and stored in a database, and the scanned images are verified using MDF.
Disadvantage of the existing system
- An accuracy of 81.58% is low compared to the proposed system
- Since every handwritten character of each person must be scanned and stored in the database, the system is very time-consuming and requires more manpower
- Since handwritten characters are not the most important identifying trait of a human being, this system is not widely used
PROPOSED SYSTEM
The proposed system is Off-line Signature Verification using the Enhanced Modified Direction Feature and Neural-based Classification, in which MDF is applied to signature images. Specifically, a number of features have been combined with MDF to capture and investigate various structural and geometric properties of the signatures. To perform verification or identification of a signature, several steps must be performed. After preprocessing all signatures from the database by converting them to portable bitmap (PBM) format, their boundaries are extracted to facilitate the extraction of features using MDF. Verification experiments are performed with classifiers; we use a Radial Basis Function (RBF) classifier, which gives an accuracy of 91.21%.
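To illustrate the scoring step of such a classifier (a sketch under the assumption of Gaussian kernels; the centres, weights, and sigma come from training and are not shown), an RBF network scores a feature vector as a weighted sum of kernel activations, and a threshold on this score accepts or rejects the signature:

Module RbfClassifier
    ' Gaussian kernel: exp(-||x - c||^2 / (2 * sigma^2)).
    Function GaussianKernel(ByVal x() As Double, ByVal center() As Double, ByVal sigma As Double) As Double
        Dim sqDist As Double = 0
        For i As Integer = 0 To x.Length - 1
            sqDist += (x(i) - center(i)) * (x(i) - center(i))
        Next
        Return Math.Exp(-sqDist / (2 * sigma * sigma))
    End Function

    ' Weighted sum of the kernel activations of all trained prototypes.
    Function Score(ByVal x() As Double, ByVal centers()() As Double, _
                   ByVal weights() As Double, ByVal sigma As Double) As Double
        Dim s As Double = 0
        For j As Integer = 0 To weights.Length - 1
            s += weights(j) * GaussianKernel(x, centers(j), sigma)
        Next
        Return s
    End Function
End Module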
Advantage of proposed system
- An accuracy of 91.21%, which is very high compared to the existing system
- It saves a great deal of time
- It is user friendly
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are:
TECHNICAL FEASIBILITY
ECONOMICAL FEASIBILITY
OPERATIONAL FEASIBILITY
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would in turn place high demands on the client. The developed system must have modest requirements, so that only minimal or no changes are required to implement it.
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, and the expenditures must be justified. The developed system was well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
OPERATIONAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users depends solely on the methods employed to educate users about the system and to make them familiar with it. Their confidence must be raised so that they can offer constructive criticism, which is welcomed, as they are the final users of the system.
ANALYSIS MODEL
The model being followed is the WATERFALL MODEL, in which the phases are organized in a linear order. First of all, the feasibility study is done. Once that part is over, the requirement analysis and project planning begin. If a system already exists and modification or the addition of new modules is needed, analysis of the present system can be used as the basic model.
The design starts after the requirement analysis is complete, and the coding begins after the design is complete. Once the programming is completed, the testing is done. In this model, the activities of a software development project are performed in a fixed sequence.
Here the linear ordering of these activities is critical: the output of one phase is the input of the next, and the output of each phase must be consistent with the overall requirements of the system. Some qualities of the spiral model are also incorporated; for example, the people concerned with the project review the work done at the completion of each phase.
The WATERFALL MODEL was chosen because all requirements were known beforehand and the objective of our software development is the computerization/automation of an already existing manual working system.
WATERFALL MODEL
Three Tier Architecture in ASP.NET
A 3-tier application is a program organized into three major disjoint tiers, or layers. Here we can see how these layers increase the reusability of code.
These layers are described below.
1. Application layer
2. Business layer
a. Property layer (sub-layer of the business layer)
3. Data layer
Advantages of Three-Tier Architecture
The main characteristic of a host architecture is that the application and databases reside on the same host computer, and the user interacts with the host using an unfriendly, dumb terminal. This architecture does not support distributed computing (the host applications are not able to connect to a database of a strategically allied partner). Some managers found that developing a host application took too long and was expensive. These disadvantages consequently led to the client-server architecture.
Client-server architecture is a 2-tier architecture because the client does not distinguish between the presentation layer and the business layer. The increasing demands on GUI controls made it difficult to manage the mixture of GUI and business-logic source code ("spaghetti code"). Further, the client-server architecture does not adequately support change management. Suppose the government increases the entertainment tax rate from 4% to 8%; in the client-server case, we have to send an update to every client, and all clients must update synchronously at a specific time, otherwise we may store invalid or wrong information. The client-server architecture is also a burden on network traffic and resources: if about five hundred clients are working against one data server, we will have five hundred ODBC connections and numerous bulky record sets that must be transported from the server to the clients (because the business layer resides on the client side). The fact that client-server has no caching facilities like those in ASP.NET causes additional network traffic. Normally, a server has better hardware than a client and is therefore able to compute algorithms faster, which is an additional argument for the 3-tier architecture. This layering of the application makes functions more easily reusable, and it becomes easy to find functions that have been written previously. If a programmer wants to make further updates to the application, he can easily understand the previously written code and update it.
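To make the change-management point concrete (a hypothetical sketch; the class name and figures are taken from the tax example above), keeping the rate in one business-layer class means the 4% to 8% change is a single server-side edit rather than an update pushed to every client:

Public Class EntertainmentTaxRules
    ' Changing the rate here is one server-side edit; no client is redeployed.
    Private Const TaxRate As Decimal = 0.08D

    Public Shared Function AddTax(ByVal amount As Decimal) As Decimal
        Return amount * (1D + TaxRate)
    End Function
End Class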
Application layer or Presentation layer
The application layer is the form that provides the user interface to either the programmer or the end user. The programmer uses this layer for designing the interface and for getting and setting the data back and forth.
Business layer
This layer is a class in which we write the functions that work as mediators to transfer data between the application (presentation) layer and the data layer. In the three-tier architecture, we never let the data access layer interact with the presentation layer directly.
Property Layer
This layer is also a class, in which we declare the variables corresponding to the fields of the database that the application requires, and create properties so that we can get or set data in those variables. The properties are public so that their values can be accessed from the other layers.
Data Access Layer
This layer is also a class, which we use to get and set data in the database. This layer interacts only with the database: we write the database queries or use stored procedures to read data from the database or to perform any operation on it.
Summary
o The application layer is the form that we design using controls like text boxes, labels, command buttons, etc.
o The business layer is the class where we write the functions that take data from the application layer and pass it through to the data access layer.
o The data layer is the class that gets data from the business layer and sends it to the database, or gets data from the database and sends it to the business layer.
o The property layer is a sub-layer of the business layer in which we create the properties used to set or get values from the application layer. These properties hold the values in an object, so we can access them until the object is destroyed.
Data flow from application layer to data layer
Here we pass the code of the student to the business layer and, based on that code, get the student's data from the database, which is then displayed in the application layer. A sketch of this flow follows.
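Below is a minimal sketch of that flow in VB.NET, with assumed names (StudentProperties, StudentBusiness, StudentDataAccess, a Students table, and a local connection string); it is an illustration, not the project's actual code.

Imports System.Data.SqlClient

' Property layer: variables for the database fields, exposed as properties.
Public Class StudentProperties
    Private _code As String
    Private _name As String
    Public Property StudentCode() As String
        Get
            Return _code
        End Get
        Set(ByVal value As String)
            _code = value
        End Set
    End Property
    Public Property StudentName() As String
        Get
            Return _name
        End Get
        Set(ByVal value As String)
            _name = value
        End Set
    End Property
End Class

' Business layer: mediates between the presentation and data access layers.
Public Class StudentBusiness
    Public Function GetStudent(ByVal code As String) As StudentProperties
        Return New StudentDataAccess().FetchByCode(code)
    End Function
End Class

' Data access layer: the only layer that talks to the database.
Public Class StudentDataAccess
    Private Const ConnString As String = _
        "Data Source=.;Initial Catalog=School;Integrated Security=True"

    Public Function FetchByCode(ByVal code As String) As StudentProperties
        Using conn As New SqlConnection(ConnString)
            Using cmd As New SqlCommand("SELECT Name FROM Students WHERE Code = @code", conn)
                cmd.Parameters.AddWithValue("@code", code)
                conn.Open()
                Dim result As Object = cmd.ExecuteScalar()
                Dim p As New StudentProperties()
                p.StudentCode = code
                If result IsNot Nothing Then p.StudentName = CStr(result)
                Return p
            End Using
        End Using
    End Function
End Class

In the presentation layer, a button handler would then call New StudentBusiness().GetStudent(txtCode.Text) and display the returned StudentName on the form; txtCode is likewise an assumed control name.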
N-Tier Applications:
N-tier applications can easily implement the concepts of distributed application design and architecture. N-tier applications provide strategic benefits to enterprise solutions. While 2-tier client-server applications can help us create quick and easy solutions and may be used for rapid prototyping, they can easily become a maintenance and security nightmare.
N-tier applications provide specific advantages that are vital to the business continuity of the enterprise. Typical features of a real-life n-tier application may include the following:
· Security
· Availability and Scalability
· Manageability
· Easy Maintenance
· Data Abstraction
The points mentioned above are some of the key design goals of a successful n-tier application that intends to provide a good business solution.
Definition:
Simply stated, an n-tier application helps us distribute the overall functionality into various tiers or layers:
· Presentation Layer
· Business Rules Layer
· Data Access Layer
Each layer can be developed independently of the others, provided that it adheres to the standards and communicates with the other layers as per the specifications.
This is one of the biggest advantages of the n-tier application. Each layer can potentially treat the other layers as a "black box".
In other words, each layer does not care how the other layers process the data, as long as they send the right data in the correct format.
The Presentation Layer:
Also called the client layer, this layer comprises components dedicated to presenting data to the user: for example, Windows/Web forms and buttons, edit boxes, text boxes, labels, grids, etc.
The Business Rules Layer:
This layer encapsulates the business rules, or business logic, of the application. Having a separate layer for business logic is a great advantage, because any change in business rules can be handled entirely in this layer. As long as the interface between the layers remains the same, any change to the functionality/processing logic in this layer can be made without impacting the others. Many client-server applications failed to be implemented successfully because changing the business logic was a painful process.
The Data Access Layer:
This layer comprises components that help in accessing the database. Used in the right way, this layer provides a level of abstraction over the database structures. Simply put, changes made to the database, tables, etc., do not affect the rest of the application, because of the data access layer. The other application layers send data requests to this layer and receive responses from it.
The current application is being developed by taking the 3-tier architecture as a prototype. The 3-tier architecture is the most common approach used for web applications today. In the typical example of this model, the web browser acts as the client, IIS handles the business logic, and a separate tier, MS-SQL Server, handles the database functions.
Although the 3-tier approach increases scalability and introduces a separation of business logic from the display and database layers, it does not truly separate the application into specialized, functional layers. For prototypes or simple web applications, the 3-tier architecture may be sufficient. However, with complex demands placed on web applications, a 3-tiered approach falls short in several key areas, including flexibility and scalability. These shortcomings occur mainly because the business logic tier is still too broad: it groups into one tier too many functions that could be separated out into a finer-grained model.
The proposed system can be designed perfectly with the three-tier model, as all layers fit naturally into the project. In the future, while expanding the system, the n-tier architecture can be used in order to implement integration touch points and to provide enhanced user interfaces. The following diagram represents the typical n-tier architecture.
SYSTEM REQUIREMENT SPECIFICATION
NON-FUNCTIONAL REQUIREMENTS
Performance Requirements:
Good bandwidth, less congestion on the network, and identifying the shortest route to the destination will all improve performance.
Safety Requirements:
No harm is expected from the use of the product either to the OS or any data.
Product Security Requirements:
The product is protected from unauthorized users. The system allows only authenticated users to work on the application. The users of this system are the organization and the ISP administrator.
Software Quality Attributes:
The product is user friendly and is accessible from the client. The application is reliable, and its functioning ensures that the ISP web service remains accessible to the various organizations. As it is developed in .NET, it is highly interoperable with operating systems that provide support for MSIL (server side). The system requires less maintenance, as it is not installed on the client but hosted on the ISP. The firewall, antivirus protection, etc., are provided by the ISP.
SYSTEM REQUIREMENTS
SOFTWARE REQUIREMENTS
Operating system : Windows XP Professional
Front End : Microsoft Visual Studio .Net 2008
Coding Language : VB.NET
HARDWARE REQUIREMENTS
SYSTEM : Pentium IV 2.4 GHz
HARD DISK : 40 GB
FLOPPY DRIVE : 1.44 MB
MONITOR : 15" VGA colour
MOUSE : Logitech.
RAM : 256 MB
KEYBOARD : 110 keys enhanced.
TESTING
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
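For illustration, a unit test for the boundary-extraction sketch shown earlier might look like the following under MSTest (Microsoft.VisualStudio.TestTools.UnitTesting, shipped with Visual Studio 2008); both the test and the routine it exercises are illustrative assumptions, not the project's test suite.

Imports Microsoft.VisualStudio.TestTools.UnitTesting

<TestClass()> _
Public Class FeatureExtractionTests
    <TestMethod()> _
    Public Sub InteriorPixelIsNotBoundary()
        ' A 3x3 block of ink: only the centre pixel is fully surrounded.
        Dim ink(2, 2) As Boolean
        For y As Integer = 0 To 2
            For x As Integer = 0 To 2
                ink(y, x) = True
            Next
        Next
        Dim b As Boolean(,) = FeatureExtraction.ExtractBoundary(ink)
        Assert.IsFalse(b(1, 1), "A fully surrounded ink pixel must not be boundary.")
    End Sub
End Class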
Integration testing
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.
Organization and preparation of functional tests are focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.
System Test
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
White Box Testing
White box testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.
Black Box Testing
Black box testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.
Test objectives
· All field entries must work properly.
· Pages must be activated from the identified link.
· The entry screen, messages and responses must not be delayed.
Features to be tested
· Verify that the entries are of the correct format
· No duplicate entries should be allowed
· All links should take the user to the correct page.
Integration Testing
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform, aimed at exposing failures caused by interface defects.
The task of the integration test is to check that components or software applications (e.g., components in a software system or, one step up, software applications at the company level) interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
CONCLUSION
In this paper, we have presented a hybrid HMM/ANN system for recognizing unconstrained offline handwritten text lines. The key features of the recognition system are the novel approaches to preprocessing and recognition, which are both based on ANNs. The preprocessing uses MLPs to clean and enhance the images, to automatically classify local extrema in order to correct the slope and to normalize the size of the text-line images, and to perform a non-uniform slant correction. The recognition is based on hybrid optical HMM/ANN models, where an MLP is used to estimate the emission probabilities. The main property of ANNs that is useful for preprocessing tasks is their ability to learn complex nonlinear input-output relationships from examples. Used for regression, an MLP can learn the appropriate filter from examples; we have exploited this property to clean and enhance the text images. Used for classification, MLPs can determine the membership of interest points in the image to the reference lines, which is useful for slope correction and size normalization, and can locally detect slant in a text image. This preprocessing behaved favorably when compared to other preprocessing techniques.