Mobility Tracking System - Engineering Seminar

Mobility tracking system
The current mobility tracking system proposes an integrated scheme for tracking the mobility of a user based on autoregressive models that accurately capture the characteristics of realistic user movements in wireless networks.
The mobility parameters are obtained from training data by computing Minimum Mean Squared Error (MMSE) estimates. Estimation of the mobility state, which incorporates the position, velocity, and acceleration of the mobile station, is accomplished via an extended Kalman filter using signal measurements from the wireless network. 
By combining mobility parameter and state estimation in an integrated framework, the user obtains an efficient and accurate real-time mobility tracking scheme that can be applied in a variety of wireless networking applications. Two variants of an autoregressive mobility model are considered, and the proposed tracking scheme is validated using mobile trajectories collected from drive-test data. The results confirm the accuracy of the proposed tracking scheme even when only a small number of data samples is available for initial training.
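The state estimation step described above can be sketched in code. The following is an illustrative simplification, not the paper's exact extended Kalman filter: it uses a plain linear Kalman filter in one dimension, with an assumed constant-acceleration transition matrix and made-up noise levels, to show how a [position, velocity, acceleration] state is tracked from noisy position measurements.

```python
def mat_mul(A, B):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def kalman_track(measurements, dt=1.0, q=0.01, r=1.0):
    """Track a 1-D [position, velocity, acceleration] state from noisy positions.

    The transition matrix F below is the standard constant-acceleration model;
    q (process noise) and r (measurement noise) are assumed demo values.
    """
    F = [[1, dt, 0.5 * dt * dt],
         [0, 1, dt],
         [0, 0, 1]]
    x = [[0.0], [0.0], [0.0]]                                   # state estimate
    P = [[1e3 if i == j else 0.0 for j in range(3)] for i in range(3)]  # covariance
    estimates = []
    for z in measurements:
        # Predict: propagate state and covariance through the motion model.
        x = mat_mul(F, x)
        Ft = [list(row) for row in zip(*F)]
        P = mat_mul(mat_mul(F, P), Ft)
        for i in range(3):
            P[i][i] += q
        # Update: only position is observed (H = [1, 0, 0]), so the
        # innovation covariance S is a scalar and no matrix inverse is needed.
        innovation = z - x[0][0]
        S = P[0][0] + r
        K = [P[i][0] / S for i in range(3)]          # Kalman gain = P H^T / S
        for i in range(3):
            x[i][0] += K[i] * innovation
        P = [[P[i][j] - K[i] * P[0][j] for j in range(3)] for i in range(3)]
        estimates.append([x[0][0], x[1][0], x[2][0]])
    return estimates

if __name__ == "__main__":
    # A user moving at roughly 2 m/s with small measurement noise.
    zs = [2.1, 3.9, 6.0, 8.1, 9.9, 12.0, 14.1, 15.9, 18.0, 20.1]
    est = kalman_track(zs)
    print("final [pos, vel, acc] estimate:", est[-1])
```

After a few steps the velocity estimate settles near the true 2 m/s even though only positions are measured, which is the benefit of carrying velocity and acceleration in the state.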

Distributed mobility management for target tracking in a mobile sensor network is used to track a mobile user's location with the help of distributed database management and Bayesian estimation.
To ensure better tracking quality for a moving target, it is beneficial to dynamically move nodes to advantageous locations.
In this model, the mobility state consists of position, velocity, and acceleration. The linear system model is capable of capturing realistic user mobility patterns, but specification of an optimal set of model parameters is not straightforward.

  The proposed system overcomes the problems in the existing system. A linear system model of mobility has been applied to real-time mobility tracking via various state estimation methods, such as Kalman filters. The proposed system provides a viable solution to mobility tracking for wireless networks. In this model, the mobility state consists of position, velocity, and acceleration. The proposed system consists of
     1. Distributed Database:
Collections of data (e.g. in a database) can be distributed across multiple physical locations. A distributed database is distributed into separate partitions/fragments. Each partition/fragment of a distributed database may be replicated.
    2. Mobile Sensor Network:
            It works in any environment, so a dynamically changing topology poses no problem. It uses Bayesian estimation.
Bayesian Estimation:
Bayesian estimation is used for prediction: the project predicts a mobile user's movement with the help of Bayesian estimation.
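The prediction idea can be illustrated with a minimal discrete Bayes filter: keep a probability distribution over which cell the user occupies, predict it forward through a movement model, then update it with a noisy observation. The three-cell layout, the transition matrix, and the observation likelihoods below are assumed demonstration values, not taken from the project.

```python
def bayes_step(belief, transition, likelihood):
    """One predict/update cycle of a discrete Bayes filter.

    belief[j]         -- probability the user is in cell j now
    transition[j][i]  -- probability of moving from cell j to cell i
    likelihood[i]     -- probability of the observation if the user is in cell i
    """
    n = len(belief)
    # Predict: push the belief through the movement model.
    predicted = [sum(transition[j][i] * belief[j] for j in range(n)) for i in range(n)]
    # Update: weight by how well each cell explains the observation, normalise.
    unnorm = [likelihood[i] * predicted[i] for i in range(n)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

if __name__ == "__main__":
    # Three cells; the user tends to drift from cell 0 toward cell 2.
    T = [[0.6, 0.4, 0.0],
         [0.0, 0.6, 0.4],
         [0.0, 0.0, 1.0]]
    belief = [1.0, 0.0, 0.0]        # start certain the user is in cell 0
    obs = [0.1, 0.8, 0.1]           # a sensor reading that says "probably cell 1"
    belief = bayes_step(belief, T, obs)
    print("belief after one step:", belief)
```

The posterior combines the movement model (the user likely drifted to cell 1) with the observation, which is the essence of Bayesian prediction of user movement.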

The proposed scheme considers node movement decisions as part of a distributed optimization problem which integrates mobility-enhanced improvement in the quality of target tracking data with the associated negative consequences of increased energy consumption due to locomotion, potential loss of network connectivity, and loss of sensing coverage.
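The trade-off described above can be sketched as a simple utility function: expected tracking improvement minus penalties for locomotion energy and lost coverage. The weights and candidate scores below are illustrative assumptions; the actual scheme solves a distributed optimization problem, which this toy scoring only gestures at.

```python
def movement_utility(gain, distance, coverage_loss, w_energy=0.5, w_cover=2.0):
    """Net benefit of moving: tracking gain minus assumed movement costs."""
    return gain - w_energy * distance - w_cover * coverage_loss

def choose_location(current, candidates):
    """Pick the candidate with the best net utility; stay put if none beats 0.

    Each candidate is (name, tracking_gain, travel_distance, coverage_loss).
    Staying has utility 0: no gain, but no energy or coverage cost either.
    """
    best, best_u = current, 0.0
    for name, gain, dist, cover in candidates:
        u = movement_utility(gain, dist, cover)
        if u > best_u:
            best, best_u = name, u
    return best

if __name__ == "__main__":
    cands = [("A", 3.0, 2.0, 0.5),   # good gain, but far and costly in coverage
             ("B", 4.0, 1.0, 0.2),   # best gain at modest cost
             ("C", 1.0, 0.5, 0.0)]   # cheap but little gain
    print("node moves to:", choose_location("stay", cands))
```

With these numbers, candidate B wins because its tracking gain outweighs its movement costs; a candidate whose costs exceed its gain would lose to staying put.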

The proposed system has the following advantages:
The mobility state consists of position, velocity, and acceleration, so continuous movement can be identified.
Locomotion energy consumption is very low.
The administrator can retrieve a particular user's movements in large-scale networks.
The administrator can obtain periodic transaction reports.
Node collisions are avoided.
The administrator can obtain exact Angle of Arrival (AOA) processing reports.
  Preliminary investigation examines project feasibility: the likelihood that the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational, and economical feasibility of adding new modules and debugging the old running system. Any system is feasible if there are unlimited resources and infinite time. There are three aspects in the feasibility study portion of the preliminary investigation:
Economical Feasibility
Operational Feasibility
Technical Feasibility

Economical Feasibility
        Economic feasibility is concerned with comparing the development cost with the income or benefit derived from the developed system. The designed system should meet the end users' requirements economically, at minimal cost and an affordable price. In this, we need to determine how this project will help the management to take effective decisions.
Economic feasibility is mainly concerned with the cost incurred in the implementation of the software. Since this project is developed using VB.NET with C# and SQL Server, which are commonly available, the cost involved in the installation process is not high. Similarly, it is easy to recruit people to operate the software, since most people are familiar with ASP.NET with C# and SQL Server. Even if we want to train people in these areas, the training cost is very low. Hence this project has good economic feasibility.
The system, once developed, must be used efficiently; otherwise there is no point in developing it. For this, a careful study of the existing system and its drawbacks is needed. The user should be able to distinguish between the existing system and the proposed one, so as to appreciate the characteristics of the proposed system: the manual one is neither highly reliable nor particularly fast, while the proposed system is efficient, reliable, and quick to respond.

Operational Feasibility
People are inherently resistant to change, and computers have been known to facilitate change. An estimate should be made of how strong a reaction the users are likely to have toward the development of the computerized system.
 The users are accustomed to computerized systems, and such systems are becoming more common day by day. Hence, this system is operationally feasible.

Technical Feasibility
Technical feasibility centers on the existing computer system (hardware, software, etc.) and, to some extent, on how it can support the proposed addition. This involves financial considerations to accommodate technical enhancements. Technical support is also a reason for the success of the project. The techniques needed for the system should be available, and it must be reasonable to use them. Technical feasibility is mainly concerned with the study of the functions, performance, and constraints that may affect the ability to achieve an acceptable system.

By conducting an efficient technical feasibility study, we need to ensure that the project works to solve the existing problem area. Since the project is designed with VB.NET with C# as the front end and SQL Server 2000 as the back end, it is easy to install on any system where it is needed. It is efficient, easy, and user-friendly enough for almost everyone to understand. Huge amounts of data can be handled efficiently using SQL Server as the back end. Hence this project has good technical feasibility.
As this system is technically, economically and operationally feasible, this system is judged feasible.

           The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.

Objectives of .NET FRAMEWORK
1. To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment conflicts and guarantees safe execution of code.
3. To eliminate the performance problems of scripted or interpreted environments.
          There are different types of applications, such as Windows-based applications and Web-based applications. Communication in a distributed environment is designed so that code based on the .NET Framework can integrate with any other code.

The common language runtime is the foundation of the .NET Framework. It manages code at execution time, providing important services such as memory management, thread management, remoting and also ensures more security and robustness. The concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code.

The .NET Framework class library is a comprehensive, object-oriented collection of reusable types used to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts.
 Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents.

          The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services; these all run on the CLR.

  ASP.NET is the next version of Active Server Pages (ASP). It is a unified Web development platform that provides the services necessary for developers to build enterprise-class Web applications. While ASP.NET is largely syntax-compatible with ASP, it also provides a new programming model and infrastructure for more secure, scalable, and stable applications.
          ASP.NET is a compiled, .NET-based environment. The entire .NET Framework is available to any ASP.NET application. Developers can easily access the benefits of these technologies, which include the managed common language runtime (CLR) environment, type safety, inheritance, and so on.
Each of these models can take full advantage of all ASP.NET features, as well as the power of the .NET Framework and .NET Framework common language runtime. 
Accessing databases from ASP.NET applications is an often-used technique for displaying data to Web site visitors. ASP.NET makes it easier than ever to access databases for this purpose.

ASP.NET provides easy-to-use application and session-state facilities that are familiar to ASP developers and are readily compatible with all other .NET Framework APIs. ASP.NET configuration settings are stored in XML-based files, which are human readable and writable.
All ASP.NET code is compiled, rather than interpreted, which allows early binding, strong typing, and just-in-time (JIT) compilation to native code, to name only a few of its benefits. ASP.NET is also easily factorable, meaning that developers can remove modules (a session module, for instance) that are not relevant to the application they are developing.  
Developers building applications with ADO.NET will have different requirements for working with data. Developers never need to directly edit an XML file containing data, but it is very useful to understand the data architecture in ADO.NET.
ADO.NET offers several advantages over previous versions of ADO:
Performance
Scalability

XML Web services are applications that can receive requested data using XML over HTTP. XML Web services are not tied to a particular component technology or object-calling convention; they can be accessed by any language, component model, or operating system. In Visual Studio .NET, you can quickly create and include XML Web services using Visual Basic, Visual C#, JScript, Managed Extensions for C++, or ATL Server.

Extensible Markup Language (XML) provides a method for describing structured data. XML is a subset of SGML that is optimized for delivery over the Web. The World Wide Web Consortium (W3C) defines XML standards so that structured data will be uniform and independent of applications. Visual Studio .NET fully supports XML, providing the XML Designer to make it easier to edit XML and create XML schemas.

Visual Basic .NET, the latest version of Visual Basic, includes many new features. Earlier versions of Visual Basic supported interfaces but not implementation inheritance.
        Visual Basic .NET supports implementation inheritance, interfaces, and overloading. In addition, Visual Basic .NET supports multithreading.

Visual C#.NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET Framework; it manages the execution of code and also makes the development process easier by providing services.
Visual C#.NET is a CLS-compliant language. Any objects, classes, or components created in Visual C#.NET can be used in any other CLS-compliant language. In addition, you can use objects, classes, and components created in other CLS-compliant languages in Visual C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.

Constructors are used to initialize objects, whereas destructors are used to destroy them; in other words, destructors release the resources allocated to the object. In Visual C#.NET, the Finalize method is available for this purpose. Finalize is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when an object is destroyed. In addition, Finalize can be called only from the class it belongs to or from derived classes.

           Garbage Collection is another new feature in Visual C#.NET. The .NET Framework monitors allocated resources, such as objects and variables. In addition, the .NET Framework automatically releases memory for reuse by destroying objects that are no longer in use.
In Visual C#.NET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.

Overloading is another feature of Visual C#.NET. Overloading lets you define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, you can use it for constructors and properties in a class.

Visual C#.NET also supports multithreading. A multithreaded application can handle multiple tasks simultaneously. Developers can use multithreading to decrease the time taken by an application to respond to user interaction: a separate thread in the application handles user interaction.

         Visual C#.NET supports structured exception handling, which helps detect and handle errors at run time. In Visual C#.NET, you use try…catch…finally statements to create exception handlers. Using try…catch…finally statements, you can create robust and effective exception handlers that improve the performance of the application.

Features of SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services
A SQL Server database consists of six types of objects.

Internet Integration
The SQL Server 2000 database engine includes integrated XML support. It also has the scalability, availability, and security features required to operate as the data storage component of the largest Web sites. The SQL Server 2000 programming model is integrated with the Windows DNA architecture for developing Web applications, and SQL Server 2000 supports features such as English Query and the Microsoft Search Service to incorporate user-friendly queries and powerful search capabilities in Web applications.

Scalability and Availability
The SQL Server 2000 database engine can be used across platforms ranging from laptop computers running Microsoft Windows® 98 through large, multiprocessor servers running Microsoft Windows 2000 Data Center Edition. SQL Server 2000 Enterprise Edition supports features such as federated servers, indexed views, and large memory support that allow it to scale to the performance levels required by the largest Web sites.

Enterprise-Level Database Features
The SQL Server 2000 relational database engine supports the features required by demanding data-processing environments. The database engine protects data integrity while minimizing the overhead of managing thousands of users concurrently modifying the database. SQL Server 2000 distributed queries allow you to reference data from multiple sources as if it were part of a SQL Server 2000 database, while distributed transaction support protects the integrity of any updates to the distributed data. Replication also allows you to maintain multiple copies of data while ensuring that the separate copies remain synchronized.

Ease of installation, deployment, and use
SQL Server 2000 includes a set of administrative and development tools that improve the process of installing, deploying, managing, and using SQL Server across several sites. SQL Server 2000 also supports a standards-based programming model integrated with the Windows DNA, making the use of SQL Server databases and data warehouses a seamless part of building powerful and scalable systems. These features allow you to rapidly deliver SQL Server applications that customers can implement with a minimum of installation and administrative overhead.

Data warehousing
SQL Server 2000 includes tools for extracting and analyzing summary data for online analytical processing. SQL Server also includes tools for visually designing databases and analyzing data using English-based questions.

              In mobility management, the movement decision for a node is based on whether the new location will improve tracking quality. Since a node does not know a priori the quality of the sensor measurements it will get at the new location, it first predicts all possible sensor measurements corresponding to all possible candidate locations that the node might choose to move to. It treats these predicted sensor measurements as true measurements, as if they were from nodes currently located at these candidate locations.
Thus, the problem of deciding where to move is viewed as the problem of selecting the predicted measurement that is expected to best improve the quality of the tracking data. Existing Bayesian methods depend entirely on the sensor network; time-allocation and periodicity-based prediction is not possible, energy consumption is high, and the administrator cannot maintain the exact database architecture in mobile networks, so exact and fast node prediction is not possible and Angle of Arrival measurement is not possible.
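The selection step described above can be sketched as follows. This is a hedged illustration, not the project's algorithm: for each candidate location it predicts the quality of the measurement the node *would* obtain there, using an assumed model in which range-measurement variance grows with distance to the predicted target, and picks the location whose predicted measurement is most informative.

```python
def predicted_variance(candidate, target, base_noise=0.5):
    """Assumed noise model: range-measurement variance grows with distance."""
    dx, dy = candidate[0] - target[0], candidate[1] - target[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return base_noise * (1.0 + dist)

def best_candidate(candidates, predicted_target):
    """Treat predicted measurements as real ones; pick the most informative spot."""
    return min(candidates, key=lambda c: predicted_variance(c, predicted_target))

if __name__ == "__main__":
    target = (5.0, 5.0)                           # predicted target position
    spots = [(0.0, 0.0), (4.0, 4.0), (9.0, 9.0)]  # candidate node locations
    print("best candidate location:", best_candidate(spots, target))
```

Under this assumed model the candidate nearest the predicted target wins, since its predicted measurement carries the least uncertainty.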

The project “User’s Mobility Tracking Based on Autoregressive Models” is designed using Microsoft Visual Studio .NET 2005 as the front end and Microsoft SQL Server 2000 as the back end, and runs on .NET Framework version 2.0. The coding language used is VB.NET.
Mobility management is a major challenge in mobile ad hoc networks due in part to the dynamically changing network topologies. For mobile sensor networks that are deployed for surveillance applications, it is important to use a mobility management scheme that can empower nodes to make better decisions regarding their positions such that strategic tasks such as target tracking can benefit from node movement. This project describes a distributed mobility management scheme for mobile sensor networks. 

This project focuses on the mobility management problem for mobile sensor networks. Mobility management in sensor networks is different from that in mobile ad hoc networks because the movement of sensor nodes here is not random; rather, the movement of sensor nodes is purposeful, e.g., to actively and better track an intruder. 
In such scenarios, it is important to have an efficient mobility management scheme to ensure that sensor node mobility is exploited in the best possible way, e.g., to improve the quality of target tracking. 
  At the same time, the mobility management strategy should avoid inefficient usage of scarce resources, such as energy and network bandwidth. Furthermore, the mobility management scheme should also take into account the potential negative consequences of node movement, e.g., loss of area coverage, loss of connectivity, and degradation of network performance. 
In addition, node movement also involves locomotion energy and routing overhead, especially the need to reestablish routes. Therefore, a practical mobility management scheme should empower a node with the ability to determine whether it should move and where it should move to such that the movement can enhance tracking quality without depleting scarce resources or significantly compromising coverage and network connectivity. 
In this project, we present an efficient mobility management scheme that can be implemented in a fully distributed manner. The proposed mobility management scheme is a general framework that incorporates both the positive and negative consequences of node movement; it allows a node to autonomously decide whether it should move and where it should move to.

             The project includes the following modules
1. MU Details (Mobile user)
2. DB Management
3. Mobile Node management (hlr,vlr)
4. Path Details (locations)
5. Tracking 

1.   MU Details (Mobile user)
The mobile user module manages user details, the PTN (Personal Telecommunication Number), and registration details. User details can be split into the following types:
1. Profile
2. Indexing
2. DB Management
Database management contains the distributed server details; the distributed database management consists of multiple levels: DB0, DB1, and DB2.

The DB0 consists of an index file and a data file. With the location-independent numbering plan being adopted, every subscriber in the whole mobile system has an entry in the index file. If the direct file is used, each index entry only contains a pointer. When a user is residing in the current DS area, the pointer is pointing to the user’s service profile stored in the data file. The user service profile contains a pointer to the DB1 where the user is visiting. When the user is staying in another DS, the pointer in the user’s index entry points to the DB0 associated with that DS.               
All entries in the index file are allocated the same size of storage and stored in increasing order of the users’ PTNs, so that direct addressing can be used to retrieve a record from the index file. Note that the PTN does not need to be stored in the index entry. On the other hand, the T-tree or the B-tree needs to include the PTN in each index entry and store other index management information, thus requiring more memory capacity than the direct file. Therefore, the direct file is the best choice for the index file of the DB0. In the data file, each user residing in the current DS area is allocated a record to store the user’s service profile.
           Note that the access time of the DB0 is independent of the database size when the direct file technique is employed (but the access time is affected by the access frequency of the DB0). This scalability feature is very useful for future mobility applications since the number of subscribers is expected to increase steadily. 
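The direct-file idea above can be shown in a few lines: because every index entry has the same fixed size and entries are stored in PTN order, the entry for any PTN is located by arithmetic alone, with no search and no stored PTN. The entry size and PTN base below are assumed values for illustration.

```python
ENTRY_SIZE = 8          # assumed bytes per index entry (a single pointer)
BASE_PTN = 1000000      # assumed first PTN in the numbering plan

def entry_offset(ptn):
    """Byte offset of a subscriber's entry in the DB0 index file.

    Direct addressing: offset is computed from the PTN, so the lookup cost
    is constant regardless of how many subscribers the database holds.
    """
    return (ptn - BASE_PTN) * ENTRY_SIZE

if __name__ == "__main__":
    print(entry_offset(1000000))   # first subscriber sits at offset 0
    print(entry_offset(1000042))   # any other entry is one multiplication away
```

This constant-cost lookup is exactly why the DB0 access time is independent of database size, as noted below.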

            Each DB1 consists only of one part: the index file, in which each user currently residing in the DB1 area has a data item. Each data item in the index file consists of two fields: the user’s PTN and a pointer to the DB2 the user is currently visiting. No other user information is stored in the DB1. Results from Section V reveal that the T-tree is a preferable technique for the index file of DB1.

Each of the database DB2s consists of two parts: the index file and the data file. Each user currently residing in the DB2 area has an entry in the index.

 Each entry in the index consists of two fields: the user’s PTN and a pointer to the user record in the data file that stores the service profiles for each user currently visiting this DB2 area. 
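The DB0 → DB1 → DB2 pointer chain described above can be walked through with a toy example, using plain dictionaries in place of the index and data files. The database names, PTN, and profile contents are made up for illustration.

```python
# DB0: one entry per subscriber, pointing at the DB1 area being visited.
db0_index = {"PTN-1": "DB1-east"}
# DB1: per area, each visiting user's PTN points at the DB2 being visited.
db1_index = {"DB1-east": {"PTN-1": "DB2-7"}}
# DB2: the data file holding the actual service profiles.
db2_data = {"DB2-7": {"PTN-1": {"profile": "service profile of PTN-1"}}}

def locate(ptn):
    """Follow the pointer chain down to the service profile in the visited DB2."""
    db1 = db0_index[ptn]           # DB0 entry -> DB1 the user is visiting
    db2 = db1_index[db1][ptn]      # DB1 entry -> DB2 the user is visiting
    return db2_data[db2][ptn]      # DB2 data file -> the service profile

if __name__ == "__main__":
    print(locate("PTN-1")["profile"])
```

Each level stores only a pointer to the next, so locating a user costs one lookup per level no matter where in the system the user is roaming.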

Mobile Node Management (HLR, VLR)
This module manages the mobile node's address and the movement of the mobile node based on energy loss. When a user moves, joins, or leaves, the information is exchanged with the server.

Path Details (locations)
       The path details store information about tower coverage areas, directions, and locations. They make it very easy to find a particular user's location.

             The tracking module predicts the sensor measurements at candidate locations. Measurement prediction is done using the Bayesian estimation method, and the sensor measurements are used to calculate the target estimate. The node can then make a decision on where to move.

         Design is a multi-step process that focuses on data structures, software architecture, procedural details (algorithms, etc.), and the interfaces between modules. The design process also translates the requirements into a representation of the software that can be assessed for quality before coding begins.
         Computer software design changes continuously as new methods, better analysis, and broader understanding evolve. Software design is at a relatively early stage in its evolution.
         Therefore, software design methodology lacks the depth, flexibility, and quantitative nature that are normally associated with more classical engineering disciplines. However, techniques for software design do exist, criteria for design quality are available, and design notations can be applied.

 After the source code has been completed and documented, along with its related data structures, the project has to undergo testing and validation, in which there is a subtle but definite attempt to find errors. A project developer may tread lightly, designing and executing tests that demonstrate that the program works rather than tests that uncover errors; unfortunately, errors will be present, and if the project developer doesn't find them, the user will.
The project developer is always responsible for testing the individual units i.e. modules of the program. In many cases developer also conducts integration testing i.e. the testing step that leads to the construction of the complete program structure. 
This project has undergone the following test procedures to ensure its correctness
1. Unit Testing
2. Validation Testing
3. Integration Testing

The procedure-level testing is done first. By giving improper inputs, the errors that occur are noted and eliminated.
            For example, the storage of data into the tables is checked for correctness.
            In the registration form, a zero-length username and password are entered and checked, and a duplicate username is entered and checked.
Data entered in the wrong format is checked, as are an invalid e-mail ID and website URL (Uniform Resource Locator).
In the login form, the user ID is ‘user’ and the password is ‘user’. If a duplicate username is given, login fails; the form accepts only correct values, after which login succeeds. The unit test for this form passed successfully.

The final step involves validation testing, which determines whether the software functions as the user expects. The end user, rather than the system developer, conducts this test; most software developers use a process called “alpha and beta testing” to uncover errors that only the end user seems able to find.
The completion of the entire project is based on the full satisfaction of the end users. In the project, validation testing is done in various forms. In the question entry form, only the correct answer will be accepted in the answer box.
In this form, the username and password are entered and checked, and a duplicate username is also checked. The validation test passed successfully.

                 Testing is done for each module. After testing all the modules, the modules are integrated and the final system is tested with test data specially designed to show that the system will operate successfully under all conditions. Thus, system testing is a confirmation that all is correct and an opportunity to show the user that the system works.

 Implementation is the most crucial stage in achieving a successful system and in giving users confidence that the new system is workable and effective. Here, a modified application is implemented to replace an existing one. This type of conversion is relatively easy to handle, provided there are no major changes in the system.
 Each program was tested individually at the time of development using test data, and it has been verified that the programs link together in the way specified in the program specifications; the computer system and its environment are tested to the satisfaction of the user. The system that has been developed is accepted and proved to be satisfactory for the user, and so the system is going to be implemented very soon. A simple operating procedure is included so that the user can understand the different functions clearly and quickly.
Initially, as a first step, the executable form of the application is created and loaded on the common server machine, which is accessible to all users, and the server is connected to a network. The final stage is to document the entire system, covering the components and the operating procedures of the system.

              It is concluded that the application works well and satisfy the users. The application is tested very well and errors are properly debugged. The application is simultaneously accessed from more than one system. Simultaneous login from more than one place is tested. 
In the previous application, it was difficult to obtain the mobile user's location. This project uses position, velocity, and acceleration to locate the mobile user accurately, overcoming the existing problems.
            The application works according to the restrictions imposed by the respective browsers. Tracking is now fast enough, and the user's continuous movement can be identified easily. In this application, node collisions are avoided, so the user's movement is tracked easily. It is easy to find a particular user's location.
The main benefit of this application is that it shows the user's movements on a map. The map is used to track user movements effectively.

         Every application has its own merits and demerits. This project has covered almost all the requirements. Further requirements and improvements can easily be accommodated, since the coding is mainly structured and modular in nature. Improvements can be made by changing the existing modules or adding new ones. Further enhancements can be made so that the functionality of the system becomes more attractive and useful than the present one.
         This project shows user movements on a map. In the future, it could show user movements on a world map, and support for users in offline mode could also be added. This would make the application more effective.
