MULTI USER SECURITY FOR MULTICAST COMMUNICATION




The simplest form of computer chatting is a method of sending, receiving, and storing typed messages within a network of users. This network could be a WAN (Wide Area Network) or a LAN (Local Area Network). Our chatting system deals only with LANs (static IP addresses) and is made up of two applications: one runs on the server side (any computer on the network you choose to be the server), while the other is delivered to and executed on the client PC. Whenever a client wants to chat, he runs the client application, enters his user name and the host name where the server application is running, hits the Connect button, and starts chatting. The system is a many-to-many arrangement; everyone is able to "talk" to anyone else. Messages may be broadcast to all receivers (recipients are automatically notified of incoming messages) or sent to specific individuals (private chatting through the server). In the latter case, all messages are encrypted at the sender side and decrypted at the recipient, so that no one who gains access to the server can read these private messages.
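The report does not fix a particular cipher, so as an illustration only, the following C# sketch shows one way the sender-side encryption and recipient-side decryption of private messages could be implemented with the .NET RijndaelManaged (AES) class; the hard-coded key and IV are placeholder assumptions, since a real deployment would exchange them securely per session.

    using System.IO;
    using System.Security.Cryptography;
    using System.Text;

    static class MessageCrypto
    {
        // Placeholder pre-shared key (32 bytes = AES-256) and 16-byte IV;
        // a real system would negotiate these securely per session.
        static readonly byte[] Key = Encoding.ASCII.GetBytes("0123456789ABCDEF0123456789ABCDEF");
        static readonly byte[] IV  = Encoding.ASCII.GetBytes("0123456789ABCDEF");

        // Encrypt a plaintext chat message before it leaves the sender.
        public static byte[] Encrypt(string plainText)
        {
            using (RijndaelManaged aes = new RijndaelManaged())
            using (ICryptoTransform enc = aes.CreateEncryptor(Key, IV))
            using (MemoryStream ms = new MemoryStream())
            {
                using (CryptoStream cs = new CryptoStream(ms, enc, CryptoStreamMode.Write))
                using (StreamWriter sw = new StreamWriter(cs))
                    sw.Write(plainText);
                return ms.ToArray();   // ToArray is valid even after the stream is closed
            }
        }

        // Decrypt an incoming ciphertext back into a readable message at the recipient.
        public static string Decrypt(byte[] cipherText)
        {
            using (RijndaelManaged aes = new RijndaelManaged())
            using (ICryptoTransform dec = aes.CreateDecryptor(Key, IV))
            using (MemoryStream ms = new MemoryStream(cipherText))
            using (CryptoStream cs = new CryptoStream(ms, dec, CryptoStreamMode.Read))
            using (StreamReader sr = new StreamReader(cs))
                return sr.ReadToEnd();
        }
    }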


SYSTEM STUDY


2.1 Objective & Scope of the Project
The proposed system is user friendly. It provides a graphical user interface, with visual controls such as text boxes and buttons for entering data and displaying messages, which removes the inefficiency of the existing system.
                        The software can be used by anyone with even a little knowledge of computers. The user does not meet any complication in using the system, as it satisfies their needs. The system was designed with an eye to the modifications that may be required in the future. It is user friendly, compatible, quick to respond, and provides accurate information.


SYSTEM ANALYSIS


3.1 Introduction
          Systems analysis is the interdisciplinary branch of science dealing with the analysis of sets of interacting entities, the systems, often prior to their automation as computer systems, and the interactions within those systems. This field is closely related to operations research. It is also "an explicit formal inquiry carried out to help someone, referred to as the decision maker, identify a better course of action and make a better decision than he might otherwise have made."

That part or aspect of systems analysis that concentrates on finding out whether an intended course of action violates any constraints is referred to as FEASIBILITY analysis. A systems analysis in which the alternatives are ranked in terms of effectiveness for fixed cost, or in terms of cost for equal effectiveness, is referred to as COST-EFFECTIVENESS analysis. COST-BENEFIT ANALYSIS is a study where, for each alternative, the time stream of costs and the time stream of benefits (both in monetary units) are discounted to yield their present values. The comparison and ranking are made in terms of net benefits (benefits minus costs) or the ratio of benefits to costs.
In RISK-BENEFIT ANALYSIS, a cost (in monetary units) is assigned to each risk so as to make possible a comparison of the discounted sum of these costs with the discounted sum of benefits that are predicted to result from the decision. The risks considered are usually events whose probability of occurrence is low, but whose adverse consequences would be important (e.g., events such as an earthquake or the explosion of a plant). See: operations research (IIASA).




CLIENT
1. Message Tabs: These are the conversation tabs. All conversation windows are kept within these tabs. Messages are received in encrypted form; the client converts them back into readable messages.
2. Message Entry Field: This is placed at the bottom of the window. This is where the user enters whatever message he/she wants to send. A message is sent either by pressing Enter or by pressing the Send button, and is converted to cipher text before sending. Where the message goes depends on which tab is open.
3. Online Group List: This list shows all the groups which are logged in at the server. Clicking on a group will open a conversation window with it. The system converts the messages into ciphered form and sends them to the specified group. File transfer is also possible.
4. Online User List: This list shows all the users who are logged in at the server. Double-clicking on a user will open a conversation window with him.
5. Configure Dialog: This dialog is shown when the Configure option is selected from the menu. It allows entering new values and saving them to the configuration file. You can change the server host name and port.
6. Main Menu: The options available to the user. The options include Connect, Disconnect, Configure, Exit, Close Current Tab, Close All Tabs, and Help.
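As a hedged sketch of the Connect action described above (the "LOGIN" wire format is an assumption, since the report does not specify the protocol), the client could open a TCP connection to the server and announce its user name like this:

    using System.Net.Sockets;
    using System.Text;

    class ChatConnection
    {
        // Mirrors the Connect button: user name, host name and port come from
        // the entry fields / configure dialog. The "LOGIN" line is a hypothetical
        // wire format; the actual protocol is not specified in the report.
        public static NetworkStream Connect(string userName, string host, int port)
        {
            TcpClient client = new TcpClient();
            client.Connect(host, port);            // reach the server application on the LAN
            NetworkStream stream = client.GetStream();
            byte[] login = Encoding.UTF8.GetBytes("LOGIN " + userName + "\n");
            stream.Write(login, 0, login.Length);  // announce this user to the server
            return stream;                         // subsequent messages are sent encrypted
        }
    }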
Modules are:


Single Chatting
In this module a user can chat with only one online member at a time.




Group Chatting


A user can create a list of online members and send messages and files to all group members at once.


Send Files


A user can send any file to online members.




3.7 Feasibility Analysis
A Feasibility Study is a process which defines exactly what a project is and what strategic issues need to be considered to assess its feasibility, or likelihood of succeeding. Feasibility studies are useful both when starting a new business, and identifying a new opportunity for an existing business. Ideally, the feasibility study process involves making rational decisions about a number of enduring characteristics of a project, including:
What exactly is the project? Is it possible? Is it practicable? Can it be done?
Economic feasibility - are the benefits greater than the costs?
Technical feasibility - do we 'have the technology'? If not, can we get it?
Schedule feasibility - will the system be ready on time?
Operational feasibility - do we have the resources to build the system? Will the system be acceptable? Will people use it?
Customer profile: estimation of customers/revenues.
Determination of competitive advantage.
Current market segments: projected growth in each market segment and a review of what is currently on the market.
Vision/mission statement.
Definition of the proposed operations/management structure and management method.



Five common factors (TELOS)                                            


3.7.1  Technology and system feasibility


This involves questions such as whether the technology needed for the system exists, how difficult it will be to build, and whether the firm has enough experience using that technology. The assessment is based on an outline design of system requirements in terms of Input, Processes, Output, Fields, Programs, and Procedures. This can be quantified in terms of volumes of data, trends, frequency of updating, etc. in order to estimate whether the new system will perform adequately or not.
         The system is developed as a distributed application using ASP.NET and Visual C#.NET, running on any .NET Framework compatible software platform. SQL Server 2005, a powerful RDBMS, is used as the backend.
3.7.2  Economic Feasibility
                   To evaluate the effectiveness of the system, an economic analysis was conducted prior to the development of the system. In the light of the results of the study, it can be stated that the new system is cost effective and the benefits achieved outweigh the costs incurred in its development. The following things were considered in the cost-benefit analysis:


- Faster information retrieval
- Accurate and reliable data
- Speeding up of operations



3.7.3 Legal feasibility


Determines whether the proposed system conflicts with legal requirements, e.g. a Data Processing system must comply with the local Data Protection Acts. When an organization has either internal or external legal counsel, such reviews are typically standard. However, a project may face legal issues after completion if this factor is not considered at this stage.
3.7.4 Operational feasibility


Operational feasibility is a measure of how well a proposed system solves the problems, takes advantage of the opportunities identified during scope definition, and satisfies the requirements identified in the requirements analysis phase of system development.
3.7.5 Schedule feasibility


A project will fail if it takes so long to complete that it is no longer useful by the time it is finished. Typically this means estimating how long the system will take to develop, and whether it can be completed in a given time period, using methods like the payback period. Schedule feasibility is a measure of how reasonable the project timetable is. Given our technical expertise, are the project deadlines reasonable? Some projects are initiated with specific deadlines. You need to determine whether the deadlines are mandatory or desirable.




SYSTEM DESIGN




4.1 Introduction
Systems design is the process or art of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. One could see it as the application of systems theory to product development. If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development" [3], then design is the act of taking the marketing information and creating the design of the product to be manufactured.
4.2 Systems design


Requirements analysis - analyzes the needs of the end users or customers.
Benchmarking - an effort to evaluate how current systems are used.
Architectural design - creates a blueprint for the design with the necessary specifications for the hardware, software, people and data resources. In many cases, multiple architectures are evaluated before one is selected.
Design - designers produce one or more 'models' of what they see the system eventually looking like, with ideas from the analysis section either used or discarded. A document is produced with a description of the system, but nothing specific - they might say 'touchscreen' or 'GUI operating system', but not mention any specific brands.
Computer programming and debugging in the software world, or detailed design in the consumer, enterprise or commercial world - specifies the final system components.
System testing - evaluates the system's actual functionality in relation to expected or intended functionality, including all integration aspects.




4.3 Input design
The input design is the process of converting the user-oriented description of input to the computer-based business information system into a program-oriented specification. The objective of input design is to create an input layout that is easy to follow and prevents operator errors. It covers all phases of input, from the creation of the initial data to the actual entry of the data into the system for processing. The input design is the tool that ties the system to the world of its users.


Input design mainly concentrates on determining what the inputs are, how they should be arranged on the input screen, and how frequently the data are to be collected.
           The input screens are designed in a manner that avoids confusion and guides the user along the correct track. A study has also been made of the types of input and how the input forms are to be designed. Inputs from the user that could cause severe errors are strictly validated.
            The layout of the input screen is also taken into account. A very good look and feel is provided through the organized arrangement of controls such as menus and buttons.
  Input screens are very simple and user friendly. Users are allowed to access the software only after the authentication process.


4.4 Output design
                        The output design generally refers to the results generated by the system. For many end users, output is the main reason for developing the system and the basis on which they evaluate the usefulness of the application.
                        The objective of a system finds its shape in terms of the output. The analysis of the objective of a system leads to the determination of outputs. The most common type of output is screen displays.
                         Outputs also vary in terms of their contents, frequency, timing and format. The users of the output, its purpose and the sequence of details to be printed are all considered. If the outputs are inadequate in any way, the system itself is inadequate.
                        The basic requirements of outputs are that they should be accurate, timely and appropriate. When designing output, the system analyst must accomplish things like determining what information to present, whether to display or print the information, selecting the output medium, and deciding how to distribute the output to the intended recipients.
   The types of outputs are
- External Outputs
- Internal Outputs
- Interactive Outputs


4.4.1 External outputs:
                        These are the types of outputs whose destination is outside the organization; they require special attention as they project the image of the organization.


4.4.2 Internal outputs:
These are the types of outputs whose destination is within the organization. They are to be carefully designed, as they are the user's main interface with the system.


4.4.3 Interactive outputs:
           These are the types of outputs which the user uses in communicating directly with the system.




4.5 Database design:
Database design is the process of producing a detailed data model of a database. This logical data model contains all the needed logical and physical design choices and physical storage parameters needed to generate a design in a Data Definition Language, which can then be used to create a database. A fully attributed data model contains detailed attributes for each entity.
The term database design can be used to describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data. In the relational model these are the tables and views. In an object database the entities and relationships map directly to object classes and named relationships. However, the term database design could also be used to apply to the overall process of designing, not just the base data structures, but also the forms and queries used as part of the overall database application within the database management system (DBMS).                 
4.5.1 Tables


       Login

    Field       Data Type   Size
    name        varchar     50
    password    varchar     50
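A minimal ADO.NET sketch of authenticating against this Login table (the connection string and the database name ChatDB are assumptions; a production system would also hash passwords rather than compare them in plain text):

    using System.Data.SqlClient;

    class LoginCheck
    {
        // Returns true if the name/password pair exists in the Login table above.
        public static bool IsValid(string name, string password)
        {
            using (SqlConnection con = new SqlConnection(
                "Data Source=.;Initial Catalog=ChatDB;Integrated Security=True")) // illustrative
            using (SqlCommand cmd = new SqlCommand(
                "SELECT COUNT(*) FROM Login WHERE name = @name AND password = @password", con))
            {
                cmd.Parameters.AddWithValue("@name", name);
                cmd.Parameters.AddWithValue("@password", password);
                con.Open();
                return (int)cmd.ExecuteScalar() > 0;
            }
        }
    }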




LANGUAGE DESCRIPTION




7.1 .NET Platform
          The .NET Framework is Microsoft's platform for building applications that have visually stunning user experiences, seamless and secure communication, and the ability to model a range of business processes. By providing you with a comprehensive and consistent programming model and a common set of APIs, the .NET Framework helps you to build applications that work the way you want, in the programming language you prefer, across software, services, and devices.
7.1.2 Features
Secure, Multi-Language Development Platform.
Developers and IT professionals can count on .NET as a powerful and robust software development technology that provides the security advancements, management tools, and updates you need to build, test, and deploy highly reliable and secure software. .NET supports the programming language you prefer by providing one multi-language development platform, so you can choose how you want to work. The Common Language Runtime (CLR) provides support for powerful, static languages like Visual Basic® and Visual C#®, and the advent of the Dynamic Language Runtime (DLR) means that dynamic languages, such as Managed JScript, IronRuby and IronPython, are also supported.
Rapid, Model-Driven Development Paradigm.
.NET offers pioneering solutions that enable rapid application development and result in dramatic increases in productivity. For example, the new ADO.NET Entity Framework offers a model-based development paradigm and a standards-based framework that raises the level of abstraction for database programming, allowing developers to cleanly separate business logic, data, and user interface. By programming against a conceptual application model instead of programming directly against a relational storage schema, developers can greatly reduce the amount of code and maintenance required for data-oriented applications.
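As a hedged illustration of querying the conceptual model rather than the storage schema, the fragment below uses LINQ against an Entity Framework context; ChatEntities, Users, IsOnline and Name are hypothetical names that a generated Entity Data Model might expose, not part of the report's design.

    // "ChatEntities" is a hypothetical ObjectContext generated from an .edmx model;
    // requires using System; using System.Linq; and the generated model assembly.
    using (ChatEntities context = new ChatEntities())
    {
        var online = from u in context.Users   // entities, not tables
                     where u.IsOnline
                     select u.Name;
        foreach (string name in online)
            Console.WriteLine(name);
    }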
Next-Generation User Experiences.
 Windows Presentation Foundation (WPF) provides a unified framework for building applications and high-fidelity experiences in Windows Vista that blend together application UI, documents, and media content, while exploiting the full power of the computer. WPF offers developers support for both 2D and 3D graphics, hardware accelerated effects, scalability to different form factors, interactive data visualization, and superior content readability. Further, with a common file format (XAML), designers can become an integral part of the development process by working alongside developers in a workflow that promotes creativity while maintaining full fidelity.
Cutting-Edge Web Application Development.
ASP.NET is a free technology that enables Web developers to create anything from small, personal Web sites through to large, enterprise-class dynamic Web applications. Microsoft's free AJAX (Asynchronous JavaScript and XML) framework – ASP.NET AJAX – enables developers to quickly create more efficient, more interactive, and highly personalized Web experiences that work across all of the most popular browsers. And the new ASP.NET Dynamic Data functionality in Visual Studio 2008 uses a rich scaffolding framework that allows rapid data-driven Web development without writing any code.




Secure, Reliable Web Services.
The service-oriented programming model of Windows Communication Foundation (WCF) is built on the Microsoft .NET Framework and simplifies development of connected systems and ensures interoperability. Windows Communication Foundation unifies a broad array of distributed systems capabilities in a composable and extensible architecture, spanning transports, security systems, messaging patterns, encodings, network topologies, and hosting models.
Enabling Mission-Critical Business Processes.
 With .NET, developers can use Windows Workflow Foundation (WF) to model a business process with code, enabling closer collaboration between developers and business process owners, and providing end users with better access to data, thereby improving productivity.
Superior Reach Across Devices and Platforms.
 The .NET Framework enables developers to build solutions for a wide array of devices, from personal computers and servers to mobile phones and embedded devices. Silverlight, a runtime that contains a subset of the .NET Framework, helps developers expand their reach by providing a cross-browser, cross-platform, and cross-device plug-in for delivering the next generation of .NET-based media experiences, advertising and rich interactive applications (RIAs).



7.1.3 Principal design features


Interoperability
Because interaction between new and older applications is commonly required, the .NET Framework provides means to access functionality that is implemented in programs that execute outside the .NET environment. Access to COM components is provided in the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework; access to other functionality is provided using the P/Invoke feature.


Common Runtime Engine
The Common Language Runtime (CLR) is the virtual machine component of the .NET framework. All .NET programs execute under the supervision of the CLR, guaranteeing certain properties and behaviors in the areas of memory management, security, and exception handling.


Language Independence
The .NET Framework introduces a Common Type System, or CTS. The CTS specification defines all possible datatypes and programming constructs supported by the CLR and how they may or may not interact with each other. Because of this feature, the .NET Framework supports the exchange of instances of types between programs written in any of the .NET languages.


Base Class Library
The Base Class Library (BCL), part of the Framework Class Library (FCL), is a library of functionality available to all languages using the .NET Framework. The BCL provides classes which encapsulate a number of common functions, including file reading and writing, graphic rendering, database interaction and XML document manipulation.
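For instance, the file reading and writing mentioned above needs only a few BCL calls from System.IO:

    using System;
    using System.IO;

    class BclFileDemo
    {
        static void Main()
        {
            // Write and read a text file through the BCL's System.IO classes.
            File.WriteAllText("notes.txt", "Hello from the Base Class Library");
            string text = File.ReadAllText("notes.txt");
            Console.WriteLine(text);
        }
    }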




Simplified Deployment
The .NET framework includes design features and tools that help manage the installation of computer software to ensure that it does not interfere with previously installed software, and that it conforms to security requirements.


Security
The design is meant to address some of the vulnerabilities, such as buffer overflows, that have been exploited by malicious software. Additionally, .NET provides a common security model for all applications.


Portability
The design of the .NET Framework allows it to theoretically be platform agnostic, and thus cross-platform compatible. That is, a program written to use the framework should run without change on any type of system for which the framework is implemented. Microsoft's commercial implementations of the framework cover Windows, Windows CE, and the Xbox 360. In addition, Microsoft submits the specifications for the Common Language Infrastructure (which includes the core class libraries, Common Type System, and the Common Intermediate Language), the C# language, and the C++/CLI language to both ECMA and ISO, making them available as open standards. This makes it possible for third parties to create compatible implementations of the framework and its languages on other platforms.




Assemblies


The intermediate CIL code is housed in .NET assemblies. As mandated by specification, assemblies are stored in the Portable Executable (PE) format, common on the Windows platform for all DLL and EXE files. The assembly consists of one or more files, one of which must contain the manifest, which has the metadata for the assembly. The complete name of an assembly (not to be confused with the filename on disk) contains its simple text name, version number, culture, and public key token. The public key token is a unique hash generated when the assembly is compiled, thus two assemblies with the same public key token are guaranteed to be identical from the point of view of the framework. A private key can also be specified known only to the creator of the assembly and can be used for strong naming and to guarantee that the assembly is from the same author when a new version of the assembly is compiled (required to add an assembly to the Global Assembly Cache).
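The four parts of an assembly's complete name can be inspected at runtime through reflection, as this small sketch shows:

    using System;
    using System.Reflection;

    class AssemblyNameDemo
    {
        static void Main()
        {
            // Simple text name, version, culture, and public key token of this assembly.
            AssemblyName name = Assembly.GetExecutingAssembly().GetName();
            Console.WriteLine("Simple name:      " + name.Name);
            Console.WriteLine("Version:          " + name.Version);
            Console.WriteLine("Culture:          " + name.CultureInfo);
            byte[] token = name.GetPublicKeyToken() ?? new byte[0];
            Console.WriteLine("Public key token: " + BitConverter.ToString(token));
        }
    }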
Metadata


All CIL is self-describing through .NET metadata. The CLR checks the metadata to ensure that the correct method is called. Metadata is usually generated by language compilers, but developers can create their own metadata through custom attributes. Metadata contains information about the assembly, and is also used to implement the reflective programming capabilities of the .NET Framework.
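A short sketch of custom metadata: the attribute below is developer-defined (the name AuthorAttribute is purely illustrative) and is read back through reflection.

    using System;

    // A developer-defined attribute that attaches custom metadata to a class.
    [AttributeUsage(AttributeTargets.Class)]
    class AuthorAttribute : Attribute
    {
        public string Name;
        public AuthorAttribute(string name) { Name = name; }
    }

    [Author("chat team")]
    class ChatServer { }

    class MetadataDemo
    {
        static void Main()
        {
            // Read the custom metadata back through reflection.
            object[] attrs = typeof(ChatServer).GetCustomAttributes(typeof(AuthorAttribute), false);
            foreach (AuthorAttribute a in attrs)
                Console.WriteLine("Author: " + a.Name);
        }
    }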
Security


.NET has its own security mechanism with two general features: Code Access Security (CAS), and validation and verification. Code Access Security is based on evidence that is associated with a specific assembly. Typically the evidence is the source of the assembly (whether it is installed on the local machine or has been downloaded from the intranet or Internet). Code Access Security uses evidence to determine the permissions granted to the code. Other code can demand that calling code be granted a specified permission. The demand causes the CLR to perform a call stack walk: every assembly of each method in the call stack is checked for the required permission; if any assembly is not granted the permission, a security exception is thrown.
When an assembly is loaded the CLR performs various tests. Two such tests are validation and verification. During validation the CLR checks that the assembly contains valid metadata and CIL, and whether the internal tables are correct. Verification is not so exact. The verification mechanism checks to see if the code does anything that is 'unsafe'. The algorithm used is quite conservative; hence occasionally code that is 'safe' does not pass. Unsafe code will only be executed if the assembly has the 'skip verification' permission, which generally means code that is installed on the local machine.
.NET Framework uses appdomains as a mechanism for isolating code running in a process. Appdomains can be created and code loaded into or unloaded from them independent of other appdomains. This helps increase the fault tolerance of the application, as faults or crashes in one appdomain do not affect rest of the application. Appdomains can also be configured independently with different security privileges. This can help increase the security of the application by isolating potentially unsafe code. The developer, however, has to split the application into subdomains; it is not done by the CLR.




Class library


System.CodeDom
System.Collections
System.Diagnostics
System.Globalization
System.IO
System.Resources
System.Text
System.Text.RegularExpressions
The .NET Framework includes a set of standard class libraries. The class library is organized in a hierarchy of namespaces. Most of the built in APIs are part of either System.* or Microsoft.* namespaces. These class libraries implement a large number of common functions, such as file reading and writing, graphic rendering, database interaction, and XML document manipulation, among others. The .NET class libraries are available to all .NET languages. The .NET Framework class library is divided into two parts: the Base Class Library and the Framework Class Library.
The Base Class Library (BCL) includes a small subset of the entire class library and is the core set of classes that serve as the basic API of the Common Language Runtime. The classes in mscorlib.dll and some of the classes in System.dll and System.Core.dll are considered to be a part of the BCL. The BCL classes are available in both the .NET Framework and its alternative implementations, including the .NET Compact Framework, Microsoft Silverlight and Mono.


The Framework Class Library (FCL) is a superset of the BCL classes and refers to the entire class library that ships with .NET Framework. It includes an expanded set of libraries, including WinForms, ADO.NET, ASP.NET, Language Integrated Query, Windows Presentation Foundation, Windows Communication Foundation among others. The FCL is much larger in scope than standard libraries for languages like C++, and comparable in scope to the standard libraries of Java.
Memory management


The .NET Framework CLR frees the developer from the burden of managing memory (allocating and freeing up when done); instead it does the memory management itself. To this end, the memory allocated to instantiations of .NET types (objects) is done contiguously from the managed heap, a pool of memory managed by the CLR. As long as there exists a reference to an object, which might be either a direct reference to an object or via a graph of objects, the object is considered to be in use by the CLR. When there is no reference to an object, and it cannot be reached or used, it becomes garbage. However, it still holds on to the memory allocated to it. .NET Framework includes a garbage collector which runs periodically, on a separate thread from the application's thread, that enumerates all the unusable objects and reclaims the memory allocated to them.
The .NET Garbage Collector (GC) is a non-deterministic, compacting, mark-and-sweep garbage collector. The GC runs only when a certain amount of memory has been used or there is enough pressure for memory on the system. Since it is not guaranteed when the conditions to reclaim memory are reached, the GC runs are non-deterministic. Each .NET application has a set of roots, which are pointers to objects on the managed heap (managed objects). These include references to static objects, objects defined as local variables or method parameters currently in scope, and objects referred to by CPU registers.[11] When the GC runs, it pauses the application, and for each object referred to in the roots, it recursively enumerates all the objects reachable from the root objects and marks them as reachable. It uses .NET metadata and reflection to discover the objects encapsulated by an object, and then recursively walks them. It then enumerates all the objects on the heap (which were initially allocated contiguously) using reflection. All objects not marked as reachable are garbage. This is the mark phase. Since the memory held by garbage is not of any consequence, it is considered free space. However, this leaves chunks of free space between objects which were initially contiguous. The objects are then compacted together, by using memcpy to copy them over to the free space to make them contiguous again. Any reference to an object invalidated by moving the object is updated by the GC to reflect the new location. The application is resumed after the garbage collection is over.
The GC used by .NET Framework is actually generational. Objects are assigned a generation; newly created objects belong to Generation 0. The objects that survive a garbage collection are tagged as Generation 1, and the Generation 1 objects that survive another collection are Generation 2 objects. The .NET Framework uses up to Generation 2 objects. Higher generation objects are garbage collected less frequently than lower generation objects. This helps increase the efficiency of garbage collection, as older objects tend to have a larger lifetime than newer objects.
Thus, by removing older (and thus more likely to survive a collection) objects from the scope of a collection run, fewer objects need to be checked and compacted.
Common Language Runtime


The Common Language Runtime (CLR) is a core component of Microsoft's .NET initiative. It is Microsoft's implementation of the Common Language Infrastructure (CLI) standard, which defines an execution environment for program code. The CLR runs a form of bytecode called the Common Intermediate Language (CIL, previously known as MSIL, the Microsoft Intermediate Language).
Developers using the CLR write code in a language such as C# or VB.NET. At compile time, a .NET compiler converts such code into CIL code. At runtime, the CLR's just-in-time compiler converts the CIL code into code native to the operating system. Alternatively, the CIL code can be compiled to native code in a separate step prior to runtime. This speeds up all later runs of the software as the CIL-to-native compilation is no longer necessary.


Although some other implementations of the Common Language Infrastructure run on non-Windows operating systems, Microsoft's implementation runs only on Microsoft Windows operating systems.
The CLR allows programmers to ignore many details of the specific CPU that will execute the program. It also provides other important services, including the following:
Memory management
Thread management
Exception handling
Garbage collection
Security
Common Type System Overview
            The Common Type System (CTS) is a standard that specifies how Type definitions and specific values of Types are represented in computer memory. It is intended to allow programs written in different programming languages to easily share information. As used in programming languages, a Type can be described as a definition of a set of values (for example, "all integers between 0 and 10"), and the allowable operations on those values (for example, addition and subtraction).
Functions of the Common Type System


To establish a framework that helps enable cross-language integration, type safety, and high performance code execution.
To provide an object-oriented model that supports the complete implementation of many programming languages.


To define rules that languages must follow, which helps ensure that objects written in different languages can interact with each other.
The CTS also defines the rules that ensure that the data types of objects written in various languages are able to interact with each other.
Classification of Types
The common type system supports two general categories of types, each of which is further divided into subcategories:
•         Value types
Value types directly contain their data, and instances of value types are either allocated on the stack or allocated inline in a structure. Value types can be built-in (implemented by the runtime), user-defined, or enumerations. For a list of built-in value types, see the .NET Framework Class Library.
•         Reference types
Reference types store a reference to the value's memory address, and are allocated on the heap. Reference types can be self-describing types, pointer types, or interface types. The type of a reference type can be determined from values of self-describing types. Self-describing types are further split into arrays and class types. The class types are user-defined classes, boxed value types, and delegates.


Variables that are value types each have their own copy of the data, and therefore operations on one variable do not affect other variables. Variables that are reference types can refer to the same object; therefore, operations on one variable can affect the same object referred to by another variable. All types derive from the System.Object base type.
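The difference is easy to see in a few lines of C# (the struct and class names here are illustrative):

    using System;

    struct PointValue { public int X; }   // value type: copied on assignment
    class  PointRef   { public int X; }   // reference type: assignment copies the reference

    class TypeSemanticsDemo
    {
        static void Main()
        {
            PointValue v1 = new PointValue { X = 1 };
            PointValue v2 = v1;            // v2 gets its own copy of the data
            v2.X = 99;
            Console.WriteLine(v1.X);       // prints 1 - the original is untouched

            PointRef r1 = new PointRef { X = 1 };
            PointRef r2 = r1;              // r2 refers to the same heap object
            r2.X = 99;
            Console.WriteLine(r1.X);       // prints 99 - the change is visible through r1
        }
    }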
7.2 C# (C Sharp)
C# is intended to be a simple, modern, general-purpose, object-oriented programming language.
The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
The language is intended for use in developing software components suitable for deployment in distributed environments.
Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
Support for internationalization is very important.
C# is intended to be suitable for writing applications for both hosted and embedded systems,      ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.
During the development of the .NET Framework, the class libraries were originally written in a language/compiler called Simple Managed C (SMC). In January 1999, Anders Hejlsberg formed a team to build a new language, at the time called Cool, which stood for "C-like Object Oriented Language". Microsoft had considered keeping the name "Cool" as the final name of the language, but chose not to do so for trademark reasons. By the time the .NET project was publicly announced at the July 2000 Professional Developers Conference, the language had been renamed C#, and the class libraries and ASP.NET runtime had been ported to C#.
C#'s principal designer and lead architect at Microsoft is Anders Hejlsberg, who was previously involved with the design of Turbo Pascal, CodeGear Delphi (formerly Borland Delphi), and Visual J++. In interviews and technical papers he has stated that flaws in most major programming languages (e.g. C++, Java, Delphi, and Smalltalk) drove the fundamentals of the Common Language Runtime (CLR), which, in turn, drove the design of the C# programming language itself.
7.2.1 Features


By design, C# is the programming language that most directly reflects the underlying Common Language Infrastructure (CLI). Most of its intrinsic types correspond to value-types implemented by the CLI framework. However, the language specification does not state the code generation requirements of the compiler: that is, it does not state that a C# compiler must target a Common Language Runtime, or generate Common Intermediate Language (CIL), or generate any other specific format. Theoretically, a C# compiler could generate machine code like traditional compilers of C++ or FORTRAN. In practice, all existing compiler implementations target CIL.
Some notable C# distinguishing features are:
There are no global variables or functions. All methods and members must be declared within classes. Static members of public classes can substitute for global variables and functions.
Local variables cannot shadow variables of the enclosing block, unlike C and C++. Variable shadowing is often considered confusing by C++ texts.


C# supports a strict Boolean datatype, bool. Statements that take conditions, such as while and if, require an expression of a boolean type. While C++ also has a boolean type, it can be freely converted to and from integers, and expressions such as if(a) require only that a is convertible to bool, allowing a to be an int, or a pointer. C# disallows this "integer meaning true or false" approach on the grounds that forcing programmers to use expressions that return exactly bool can prevent certain types of programming mistakes such as if (a = b) (use of = instead of ==).
In C#, memory address pointers can only be used within blocks specifically marked as unsafe, and programs with unsafe code need appropriate permissions to run. Most object access is done through safe object references, which always either point to a "live" object or have the well-defined null value; it is impossible to obtain a reference to a "dead" object (one which has been garbage collected), or to a random block of memory. An unsafe pointer can point to an instance of a value type, array, string, or a block of memory allocated on the stack. Code that is not marked as unsafe can still store and manipulate pointers through the System.IntPtr type, but it cannot dereference them.
Managed memory cannot be explicitly freed; instead, it is automatically garbage collected. Garbage collection addresses memory leaks by freeing the programmer of responsibility for releasing memory which is no longer needed. C# also provides direct support for deterministic finalization with the using statement (supporting the Resource Acquisition Is Initialization idiom).
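For example, the using statement below guarantees that Dispose() runs when the block exits, so the file handle is released deterministically even though the object's memory is reclaimed later by the garbage collector:

    using System.IO;

    class UsingDemo
    {
        static void Main()
        {
            using (StreamWriter writer = new StreamWriter("log.txt"))
            {
                writer.WriteLine("resource released deterministically");
            } // writer.Dispose() is called here, even if an exception is thrown
        }
    }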
Multiple inheritance is not supported, although a class can implement any number of interfaces. This was a design decision by the language's lead architect to avoid complication, avoid dependency hell and simplify architectural requirements throughout CLI.




C# is more typesafe than C++. The only implicit conversions by default are those which are considered safe, such as widening of integers and conversion from a derived type to a base type. This is enforced at compile-time, during JIT, and, in some cases, at runtime. There are no implicit conversions between booleans and integers, nor between enumeration members and integers (except for literal 0, which can be implicitly converted to any enumerated type). Any user-defined conversion must be explicitly marked as explicit or implicit, unlike C++ copy constructors and conversion operators, which are both implicit by default.
Enumeration members are placed in their own scope.
C# provides properties as syntactic sugar for a common pattern in which a pair of methods, accessor (getter) and mutator (setter) encapsulate operations on a single attribute of a class.
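A minimal property, with the accessor/mutator pair the compiler generates behind the scenes:

    class ChatUser
    {
        private string name;               // the single attribute being encapsulated

        // Compiled into a get_Name/set_Name method pair.
        public string Name
        {
            get { return name; }
            set { name = value; }
        }
    }

    // Usage reads like field access:  user.Name = "alice";  string n = user.Name;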
Full type reflection and discovery is available.
C# currently (as of 3 June 2008) has 77 reserved words.
7.3 ASP.NET
ASP.NET is a web application framework developed and marketed by Microsoft to allow programmers to build dynamic web sites, web applications and web services. It was first released in January 2002 with version 1.0 of the .NET Framework, and is the successor to Microsoft's Active Server Pages (ASP) technology. ASP.NET is built on the Common Language Runtime (CLR), allowing programmers to write ASP.NET code using any supported .NET language.




FEATURES OF .NET FRAMEWORK
- Increased Performance
- Developer Productivity
- Powerful Security
- Integration with Existing Systems
- Easy Deployment
- Mobility Support
- XML Web Service Support
- Support for over 20 programming languages
- Flexible Data Access


7.4 Microsoft SQL Server
Microsoft SQL Server is a relational model database server produced by Microsoft. Its primary query languages are T-SQL and ANSI SQL.
7.4.1 Architecture


Protocol layer


The protocol layer implements the external interface to SQL Server. All operations that can be invoked on SQL Server are communicated to it via a Microsoft-defined format called Tabular Data Stream (TDS). TDS is an application layer protocol used to transfer data between a database server and a client. It was initially designed and developed by Sybase Inc. for their Sybase SQL Server relational database engine in 1984, and later used by Microsoft in Microsoft SQL Server. TDS packets can be encased in other physical-transport-dependent protocols, including TCP/IP, named pipes, and shared memory.



Data storage


The main unit of data storage is a database, which is a collection of tables with typed columns. SQL Server supports different data types, including primary types such as Integer, Float, Decimal, Char (including character strings), Varchar (variable length character strings), Binary (for unstructured blobs of data), and Text (for textual data), among others. It also allows user-defined composite types (UDTs) to be defined and used. SQL Server also makes server statistics available as virtual tables and views (called Dynamic Management Views or DMVs). A database can also contain other objects including views, stored procedures, indexes and constraints, in addition to tables, along with a transaction log. A SQL Server database can contain a maximum of 2^31 objects, and can span multiple OS-level files with a maximum file size of 2^20 TB. The data in the database are stored in primary data files with an extension .mdf. Secondary data files, identified with an .ndf extension, are used to store optional metadata. Log files are identified with the .ldf extension.
Storage space allocated to a database is divided into sequentially numbered pages, each 8 KB in size. A page is the basic unit of I/O for SQL Server operations. A page is marked with a 96-byte header which stores metadata about the page including the page number, page type, free space on the page and the ID of the object that owns it. Page type defines the data contained in the page - data stored in the database, index, allocation map which holds information about how pages are allocated to tables and indexes, change map which holds information about the changes made to other pages since last backup or logging, or contain large data types such as image or text. While page is the basic unit of an I/O operation, space is actually managed in terms of an extent which consists of 8 pages.




For physical storage of a table, its rows are divided into a series of partitions (numbered 1 to n). The partition size is user defined; by default all rows are in a single partition. A table is split into multiple partitions in order to spread a database over a cluster. Rows in each partition are stored in either a B-tree or a heap structure. If the table has an associated index to allow fast retrieval of rows, the rows are stored in order according to their index values, with a B-tree providing the index. The actual data is in the leaf nodes of the B-tree, with the other nodes storing the index values for the leaf data reachable from the respective nodes. If the index is non-clustered, the rows are not sorted according to the index keys. An indexed view has the same storage structure as an indexed table. A table without an index is stored in an unordered heap structure. Both heaps and B-trees can span multiple allocation units.
Buffer management


SQL Server buffers pages in RAM to minimize disc I/O. Any 8 KB page can be buffered in-memory, and the set of all pages currently buffered is called the buffer cache. The amount of memory available to SQL Server decides how many pages will be cached in memory. The buffer cache is managed by the Buffer Manager. Either reading from or writing to any page copies it to the buffer cache. Subsequent reads or writes are redirected to the in-memory copy, rather than the on-disc version. The page is updated on the disc by the Buffer Manager only if the in-memory cache has not been referenced for some time. While writing pages back to disc, asynchronous I/O is used whereby the I/O operation is done in a background thread so that other operations do not have to wait for the I/O operation to complete. Each page is written along with its checksum when it is written. When reading the page back, its checksum is computed again and matched with the stored version to ensure the page has not been damaged or tampered with in the meantime.
Logging and Transaction


SQL Server ensures that any change to the data is ACID-compliant, i.e., it uses transactions to ensure that any operation either completes totally or is undone if it fails, but never leaves the database in an intermediate state. Using transactions, a sequence of actions can be grouped together, with the guarantee that either all actions will succeed or none will. SQL Server implements transactions using a write-ahead log. Any change made to a page updates the in-memory cache of the page; simultaneously, all the operations performed are written to a log, along with the ID of the transaction the operation was a part of. Each log entry is identified by an increasing Log Sequence Number (LSN), which ensures that no event overwrites another. SQL Server ensures that the log will be written onto the disc before the actual page is written back. This enables SQL Server to ensure the integrity of the data, even if the system fails. If both the log and the page were written before the failure, the entire data is on persistent storage and integrity is ensured. If only the log was written (the page was either not written or not written completely), then the actions can be read from the log and repeated to restore integrity. If the log wasn't written, then integrity is also maintained, although the database state remains unchanged, as if the transaction never occurred. If the log was only partially written, then the actions associated with the unfinished transaction are discarded; since the log was only partially written, the page is guaranteed not to have been written, again ensuring data integrity. Removing the unfinished log entries effectively undoes the transaction. SQL Server ensures consistency between the log and the data every time an instance is restarted.
Concurrency and locking


SQL Server allows multiple clients to use the same database concurrently. As such, it needs to control concurrent access to shared data, to ensure data integrity when multiple clients update the same data, or when clients attempt to read data that is in the process of being changed by another client. SQL Server provides two modes of concurrency control: pessimistic concurrency and optimistic concurrency. When pessimistic concurrency control is being used, SQL Server controls concurrent access by using locks. Locks can be either shared or exclusive. An exclusive lock grants the user exclusive access to the data; no other user can access the data as long as the lock is held. Shared locks are used when some data is being read; multiple users can read from data locked with a shared lock, but cannot acquire an exclusive lock - the latter would have to wait for all shared locks to be released. Locks can be applied at different levels of granularity: on entire tables, pages, or even on a per-row basis. For indexes, a lock can be taken either on the entire index or on index leaves. The level of granularity to be used is defined on a per-database basis by the database administrator. While a fine-grained locking system allows more users to use the table or index simultaneously, it requires more resources, so it does not automatically translate into a higher-performing solution. SQL Server also includes two more lightweight mutual exclusion solutions - latches and spinlocks - which are less robust than locks but less resource intensive. SQL Server uses them for DMVs and other resources that are usually not busy. SQL Server also monitors all worker threads that acquire locks to ensure that they do not end up in deadlocks; in case they do, SQL Server takes remedial measures, which in many cases means killing one of the threads entangled in the deadlock and rolling back the transaction it started. To implement locking, SQL Server contains the Lock Manager. The Lock Manager maintains an in-memory table that manages the database objects and locks, if any, on them, along with other metadata about the lock. Access to any shared object is mediated by the Lock Manager, which either grants access to the resource or blocks it.
SQL Server also provides the optimistic concurrency control mechanism, which is similar to the multiversion concurrency control used in other databases. The mechanism allows a new version of a row to be created whenever the row is updated, as opposed to overwriting the row, i.e., a row is additionally identified by the ID of the transaction that created the version of the row. Both the old as well as the new versions of the row are stored and maintained, though the old versions are moved out of the database into a system database identified as Tempdb. When a row is in the process of being updated, any other requests are not blocked (unlike locking) but are executed on the older version of the row. If the other request is an update statement, it will result in two different versions of the rows - both of them will be stored by the database, identified by their respective transaction IDs.
Data retrieval


The main mode of retrieving data from an SQL Server database is querying for it. The query is expressed using a variant of SQL called T-SQL, a dialect Microsoft SQL Server shares with Sybase SQL Server due to its legacy. The query declaratively specifies what is to be retrieved. It is processed by the query processor, which figures out the sequence of steps that will be necessary to retrieve the requested data. The sequence of actions necessary to execute a query is called a query plan. There might be multiple ways to process the same query. For example, for a query that contains a join statement and a select statement, executing join on both the tables and then executing select on the results would give the same result as selecting from each table and then executing the join, but result in different execution plans.




In such cases, SQL Server chooses the plan that is expected to yield the results in the shortest possible time. This is called query optimization and is performed by the query processor itself.
SQL Server includes a cost-based query optimizer which tries to optimize the cost, in terms of the resources it will take to execute the query. Given a query, the query optimizer looks at the database schema, the database statistics and the system load at that time. It then decides in which sequence to access the tables referred to in the query, in which sequence to execute the operations, and which access method to use to access the tables. For example, if the table has an associated index, it decides whether the index should be used or not: if the index is on a column whose values are not unique for most rows (low "selectivity"), it might not be worthwhile to use the index to access the data. Finally, it decides whether to execute the query concurrently or not. While a concurrent execution is more costly in terms of total processor time, the fact that the execution is split across different processors might mean it will execute faster. Once a query plan is generated for a query, it is temporarily cached. For further invocations of the same query, the cached plan is used. Unused plans are discarded after some time.
SQL Server also allows stored procedures to be defined. Stored procedures are parameterized T-SQL queries that are stored in the server itself (and not issued by the client application, as is the case with general queries). Stored procedures can accept values sent by the client as input parameters, and send back results as output parameters. They can call defined functions and other stored procedures, and access to them can be selectively granted. Unlike other queries, stored procedures have an associated name, which is used at runtime to resolve them into the actual queries. Also, because the code need not be sent from the client every time (it can be invoked by name), network traffic is reduced and performance somewhat improved. Execution plans for stored procedures are also cached as necessary.
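A hedged ADO.NET sketch of calling such a stored procedure by name, with one input and one output parameter (the procedure dbo.CountMessages and the ChatDB connection string are hypothetical):

    using System.Data;
    using System.Data.SqlClient;

    class StoredProcDemo
    {
        public static int CountMessages(string user)
        {
            using (SqlConnection con = new SqlConnection(
                "Data Source=.;Initial Catalog=ChatDB;Integrated Security=True")) // illustrative
            using (SqlCommand cmd = new SqlCommand("dbo.CountMessages", con))
            {
                cmd.CommandType = CommandType.StoredProcedure;   // resolved by name on the server
                cmd.Parameters.AddWithValue("@user", user);      // input parameter from the client
                SqlParameter total = cmd.Parameters.Add("@total", SqlDbType.Int);
                total.Direction = ParameterDirection.Output;     // result comes back as an output parameter
                con.Open();
                cmd.ExecuteNonQuery();
                return (int)total.Value;
            }
        }
    }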
SQL CLR


Microsoft SQL Server 2005 includes a component named SQL CLR via which it integrates with .NET Framework. Unlike most other applications that use .NET Framework, SQL Server itself hosts the .NET Framework runtime, i.e., memory, threading and resource management requirements of .NET Framework are satisfied by SQLOS itself, rather than the underlying Windows operating system. SQLOS provides deadlock detection and resolution services for .NET code as well. With SQL CLR, stored procedures and triggers can be written in any managed .NET language, including C# and VB.NET. Managed code can also be used to define UDT's (user defined types), which can persist in the database. Managed code is compiled to .NET assemblies and after being verified for type safety, registered at the database. After that, they can be invoked like any other procedure. However, only a subset of the Base Class Library is available, when running code under SQL CLR. Most APIs relating to user interface functionality are not available.
When writing code for SQL CLR, data stored in SQL Server databases can be accessed using the ADO.NET APIs like any other managed application that accesses SQL Server data. However, doing that creates a new database session, different from the one in which the code is executing. To avoid this, SQL Server provides some enhancements to the ADO.NET provider that allows the connection to be redirected to the same session which already hosts the running code. Such connections are called context connections and are set by setting context connection parameter to true in the connection string.
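A minimal SQL CLR procedure using such a context connection (the procedure and the reuse of the Login table are illustrative; the containing assembly would have to be registered in the database before the procedure could be invoked):

    using System.Data.SqlClient;
    using Microsoft.SqlServer.Server;

    public class ClrProcedures
    {
        // "context connection=true" reuses the session already executing this code
        // instead of opening a new database session.
        [SqlProcedure]
        public static void ListUsers()
        {
            using (SqlConnection con = new SqlConnection("context connection=true"))
            {
                con.Open();
                SqlCommand cmd = new SqlCommand("SELECT name FROM Login", con);
                SqlContext.Pipe.ExecuteAndSend(cmd);   // stream the result set back to the caller
            }
        }
    }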


SQL Server also provides several other enhancements to the ADO.NET API, including classes to work with tabular data or a single row of data, as well as classes to work with internal metadata about the data stored in the database. It also provides access to the XML features in SQL Server, including XQuery support. These enhancements are also available in T-SQL procedures as a consequence of the introduction of the new XML datatype (query, value, nodes functions).
 Services


SQL Server also includes an assortment of add-on services. While these are not essential for the operation of the database system, they provide value-added services on top of the core database management system. These services either run as a part of some SQL Server component or out-of-process as a Windows service, and present their own APIs to control and interact with them.
Service Broker


The Service Broker, which runs as a part of the database engine, provides a reliable messaging and message queuing platform for SQL Server applications. Used inside an instance, it is used to provide an asynchronous programming environment. For cross instance applications, Service Broker communicates over TCP/IP and allows the different components to be synchronized together, via exchange of messages.
Replication Services


SQL Server Replication Services are used by SQL Server to replicate and synchronize database objects, either in entirety or a subset of the objects present, across replication agents, which might be other database servers across the network, or database caches on the client side. Replication follows a publisher/subscriber model, i.e., the changes are sent out by one database server ("publisher") and are received by others ("subscribers"). SQL Server supports three different types of replication:


Transaction replication
Each transaction made to the publisher database (master database) is synced out to subscribers, who update their databases with the transaction. Transactional replication synchronizes databases in near real time.


Merge replication
Changes made at both the publisher and subscriber databases are tracked, and periodically the changes are synchronized bi-directionally between the publisher and the subscribers. If the same data has been modified differently in both the publisher and the subscriber databases, synchronization will result in a conflict which has to be resolved - either manually or by using pre-defined policies.


Snapshot replication
Snapshot replication publishes a copy of the entire database (the snapshot of the data at that moment) and replicates it out to the subscribers. Further changes to the snapshot are not tracked.
Analysis Services


SQL Server Analysis Services adds OLAP and data mining capabilities for SQL Server databases. The OLAP engine supports MOLAP, ROLAP and HOLAP storage modes for data. Analysis Services supports the XML for Analysis standard as the underlying communication protocol. The cube data can be accessed using MDX queries. Data mining specific functionality is exposed via the DMX query language. Analysis Services includes various algorithms - Decision trees, clustering algorithm, Naive Bayes algorithm, time series analysis, sequence clustering algorithm, linear and logistic regression analysis, and neural networks - for use in data mining.
Reporting Services


SQL Server Reporting Services is a report generation environment for data gathered from SQL Server databases. It is administered via a web interface. Reporting services features a web services interface to support the development of custom reporting applications. Reports are created as RDL files.
Reports can be designed using recent versions of Microsoft Visual Studio (including Visual Studio.NET 2003 onwards) with Business Intelligence Development Studio, installed or with the included Report Builder. Once created, RDL files can be rendered in a variety of formats including Excel, PDF, CSV, XML, TIFF (and other image formats), and HTML Web Archive.
Notification Services


Originally introduced as a post-release add-on for SQL Server 2000, Notification Services was bundled as part of the Microsoft SQL Server platform for the first and only time with SQL Server 2005. SQL Server Notification Services is a mechanism for generating data-driven notifications, which are sent to Notification Services subscribers. A subscriber registers for a specific event or transaction (which is registered on the database server as a trigger); when the event occurs, Notification Services can use one of three methods to send a message to the subscriber informing it of the occurrence of the event. These methods include SMTP, SOAP, or writing to a file in the filesystem.
Integration Services


SQL Server Integration Services is used to integrate data from different data sources. It provides the ETL capabilities of SQL Server for data warehousing needs. Integration Services includes GUI tools to build data extraction workflows integrating various functions, such as extracting data from various sources, querying data, transforming data (including aggregating, de-duplicating and merging data), and then loading the transformed data into other destinations, or sending e-mails detailing the status of the operation as defined by the user.
                 Relational databases are the most important database systems used in the industry today. One of the best-known database systems is MS SQL Server. SQL Server is a database management system developed and marketed by Microsoft. It runs exclusively under Windows NT and Windows 95/98.


The most important aspects of SQL Server are:


- SQL Server is easy to use.
- SQL Server scales from a mobile laptop to symmetric multiprocessor systems.
- SQL Server provides data warehousing features that until now have only been available in Oracle and other more expensive databases.


                        A database system is an overall collection of different database software components and databases, comprising the database application programs, the front-end components, the DBMS, and the databases themselves.
The Database system must provide the following features:
- A variety of user interfaces.
- Physical data independence.
- Logical data independence.
- Query optimization.
- Data integrity.
- Concurrency control.
- Backup and recovery.
- Security and authorization.


                    SQL Server is a relational DBMS. The SQL Server relational language is called Transact-SQL. SQL is a set-oriented language. This means that SQL can query many rows from one or more tables using just one statement.

  This feature allows the use of the language at a logically higher level than a procedural language. Another important property of SQL is non-procedurality. SQL contains two sublanguages: DDL and DML.

The SQL Server administrator's primary tool for interacting with the system is Enterprise Manager. Enterprise Manager has two main purposes: administration of the database server, and management of database objects. SQL Server Query Analyzer provides a graphical presentation of the execution plan of a query and an automatic component that suggests which index should be used for a selected query. This interactive component of SQL Server performs tasks like:


- Generating and executing Transact-SQL statements.
- Storing the generated Transact-SQL statements in a file.
- Analyzing execution plans for generated queries.
- Graphically illustrating the execution plan for a selected query.
