Autonomic computing is the technology that is building self-managing IT infrastructures—hardware and software that can configure, heal, optimize, and protect itself. By taking care of many of the increasingly complex management requirements of IT systems, autonomic computing allows human and physical resources to concentrate on actual business issues.
The term autonomic computing derives from the body's autonomic nervous system, controlling functions like heart rate, breathing rate, and oxygen levels without a person's conscious awareness or involvement.
The goal is to realize the promise of IT: increasing productivity while minimizing complexity for users. We are pursuing this goal on many technological fronts as we actively develop computing systems capable of running themselves with minimal human intervention.
By taking over the complicated tasks associated with the ongoing maintenance and management of computing systems, autonomic computing technology will allow IT workers to focus their talents on complex, big-picture projects that require a higher level of thinking and planning. This is the ultimate benefit of autonomic computing: freeing IT professionals to drive creativity, innovation, and opportunity.
Autonomic systems are thus being designed to recognize external threats or internal problems and then take measures to automatically prevent or correct those issues before humans even know there is a problem. These systems are also being designed to manage and proactively improve their own performance, all of which frees IT staff to focus their real intelligence on big-picture projects.
The high-tech industry has spent decades creating computer systems with ever-mounting degrees of complexity to solve a wide variety of business problems. Ironically, complexity itself has become part of the problem. As networks and distributed systems grow and change, they can become increasingly hampered by system deployment failures, hardware and software issues, not to mention human error. Such scenarios in turn require further human intervention to enhance the performance and capacity of IT components. This drives up overall IT costs, even though technology component costs continue to decline. As a result, many IT professionals seek ways to improve the return on investment in their IT infrastructure by reducing the total cost of ownership of their environments while improving the quality of service for users. Self-managing computing helps address these complexity issues by using technology to manage itself; self-managing computing is also known as autonomic computing.
Autonomic: pertaining to an on demand operating environment that responds automatically to problems, security threats, and system failures.
Autonomic computing: a computing environment with the ability to manage itself and dynamically adapt to change in accordance with business policies and objectives. Self-managing environments can perform such activities based on situations they observe or sense in the IT environment rather than requiring IT professionals to initiate the task. These environments are self-configuring, self-healing, self-optimizing, and self-protecting.
The promise of autonomic computing includes capabilities unknown in traditional products and toolsets. It includes the capacity not just to take automated action, but to do so based on an innate ability to sense and respond to change; not just to execute rules, but to continually normalize and optimize environments in real time; not just to store and execute policies, but to incorporate self-learning and self-managing capabilities. It is a landscape that eases the pain of taking IT into the future, by shifting mundane work to technology and freeing up humans for work that more directly impacts business value.
2. What is autonomic computing
Autonomic computing is about freeing IT professionals to focus on high-value tasks by making technology work smarter. This means letting computing systems and infrastructure take care of managing themselves. Ultimately, it means writing business policies and goals and letting the infrastructure configure, heal and optimize itself according to those policies while protecting itself from malicious activities. Self-managing computing systems have the ability to manage themselves and dynamically adapt to change in accordance with business policies and objectives.
In an autonomic environment the IT infrastructure and its components are self-managing. Systems with self-managing components reduce the cost of owning and operating computer systems. Self-managing systems can perform management activities based on situations they observe or sense in the IT environment. Rather than IT professionals initiating management activities, the system observes something about itself and acts accordingly. This allows the IT professional to focus on high-value tasks while the technology manages the more mundane operations. IT infrastructure components take on the following characteristics: self-configuring, self-healing, self-optimizing and self-protecting.
2.1. Self-management attributes of system components
In a self-managing autonomic environment, system components—from hardware (such as storage units, desktop computers and servers) to software (such as operating systems, middleware and business applications)—can include embedded control loop functionality. Although these control loops consist of the same fundamental parts, their functions can be divided into four broad embedded control loop categories. These categories are considered to be attributes of the system components and are defined as:
Self-configuring
Systems adapt automatically to dynamically changing environments. When hardware and software systems have the ability to define themselves "on the fly," they are self-configuring. This aspect of self-managing means that new features, software, and servers can be dynamically added to the enterprise infrastructure with no disruption of services. Self-configuring not only includes the ability for each individual system to configure itself on the fly, but also for systems within the enterprise to configure themselves into the e-business infrastructure of the enterprise. The goal of self-managing computing is to provide self-configuration capabilities for the entire IT infrastructure, not just individual servers, software, and storage devices.
Self-healing
Systems discover, diagnose, and react to disruptions. For a system to be self-healing, it must be able to recover from a failed component by first detecting and isolating that component, taking it offline, fixing or replacing it, and reintroducing the fixed or replacement component into service without any apparent application disruption. Systems will need to predict problems and take actions to prevent the failure from having an impact on applications. The self-healing objective must be to minimize all outages in order to keep enterprise applications up and available at all times. Developers of system components need to focus on maximizing the reliability and availability design of each hardware and software product toward continuous availability.
Self-optimizing
Systems monitor and tune resources automatically. Self-optimization requires hardware and software systems to efficiently maximize resource utilization to meet end-user needs without human intervention. Features must be introduced to allow the enterprise to optimize resource usage across the collection of systems within their infrastructure, while also maintaining their flexibility to meet the ever-changing needs of the enterprise.
Self-protecting
Systems anticipate, detect, identify, and protect themselves from attacks from anywhere. Self-protecting systems must have the ability to define and manage user access to all computing resources within the enterprise, to protect against unauthorized resource access, to detect intrusions and report and prevent these activities as they occur, and to provide backup and recovery capabilities that are as secure as the original resource management systems. Systems will need to build on top of a number of core security technologies already available today. Capabilities must be provided to more easily understand and handle user identities in various contexts, removing the burden from administrators.
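To make these four attributes concrete, here is a minimal sketch (hypothetical Python, not drawn from any product) that models them as an abstract interface a managed component could implement; all method names are illustrative assumptions.

```python
from abc import ABC, abstractmethod

class SelfManagingComponent(ABC):
    """Illustrative interface: one hook per self-management attribute."""

    @abstractmethod
    def self_configure(self, environment: dict) -> None:
        """Adapt settings to a changed environment (self-configuring)."""

    @abstractmethod
    def self_heal(self) -> None:
        """Detect, isolate and recover from a failed part (self-healing)."""

    @abstractmethod
    def self_optimize(self) -> None:
        """Tune resource usage toward current demand (self-optimizing)."""

    @abstractmethod
    def self_protect(self, event: dict) -> None:
        """React to a suspected intrusion or attack (self-protecting)."""
```

A server, database or storage component would supply its own implementations of these hooks.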
2.2. Comparison with the present system
IBM frequently cites four aspects of self-management, which Table 1 summarizes. Early autonomic systems may treat these aspects as distinct, with different product teams creating solutions that address each one separately. Ultimately, these aspects will be emergent properties of a general architecture, and the distinctions will blur into a more general notion of self-maintenance. The four aspects of self-management, namely self-configuring, self-healing, self-optimizing and self-protecting, are compared here.
Table 1. Four aspects of self-management as they are now and as they would be with autonomic computing

| Concept | Current computing | Autonomic computing |
| --- | --- | --- |
| Self-configuration | Corporate centers have multiple vendors and platforms. Installing, configuring, and integrating systems is time consuming and error prone. | Automated configuration of components and systems follows high-level policies. The rest of the system adjusts automatically and seamlessly. |
| Self-optimization | Systems have hundreds of manually set, nonlinear tuning parameters, and their number increases with each release. | Components and systems continually seek opportunities to improve their own performance and efficiency. |
| Self-healing | Problem determination in large, complex systems can take a team of programmers weeks. | The system automatically detects, diagnoses and repairs localized software and hardware problems. |
| Self-protection | Detection of and recovery from attacks and cascading failures is manual. | The system automatically defends against malicious attacks or cascading failures. It uses early warning to anticipate and prevent system-wide failures. |
2.3. Eight key elements
Knows Itself
An autonomic computing system needs to "know itself": its components must also possess a system identity. Since a "system" can exist at many levels, an autonomic system will need detailed knowledge of its components, current status, ultimate capacity, and all connections to other systems to govern itself. It will need to know the extent of its "owned" resources, those it can borrow or lend, and those that can be shared or should be isolated.
Configures Itself
An autonomic computing system must configure and reconfigure itself under varying (and in the future, even unpredictable) conditions. System configuration or "setup" must occur automatically, as well as dynamic adjustments to that configuration to best handle changing environments.
Optimizes Itself
An autonomic computing system never settles for the status quo - it always looks for ways to optimize its workings. It will monitor its constituent parts and fine-tune workflow to achieve predetermined system goals.
Heals Itself
An autonomic computing system must perform something akin to healing - it must be able to recover from routine and extraordinary events that might cause some of its parts to malfunction. It must be able to discover problems or potential problems, then find an alternate way of using resources or reconfiguring the system to keep functioning smoothly.
Protects Itself
A virtual world is no less dangerous than the physical one, so an autonomic computing system must be an expert in self-protection. It must detect, identify and protect itself against various types of attacks to maintain overall system security and integrity.
Adapts Itself
An autonomic computing system must know its environment and the context surrounding its activity, and act accordingly. It will find and generate rules for how best to interact with neighboring systems. It will tap available resources, even negotiate the use by other systems of its underutilized elements, changing both itself and its environment in the process — in a word, adapting.
Opens Itself
An autonomic computing system cannot exist in a hermetic environment. While independent in its ability to manage itself, it must function in a heterogeneous world and implement open standards; in other words, an autonomic computing system cannot, by definition, be a proprietary solution.
Hides Itself
An autonomic computing system will anticipate the optimized resources needed while keeping its complexity hidden. It must marshal I/T resources to shrink the gap between the business or personal goals of the user and the I/T implementation necessary to achieve those goals, without involving the user in that implementation.
2.4. Autonomic deployment model
Delivering system-wide autonomic environments is an evolutionary process enabled by technology, but it is ultimately implemented by each enterprise through the adoption of these technologies and supporting processes. The path to self-managing computing can be thought of in five levels. These levels, defined below, start at basic and continue through managed, predictive, adaptive and, finally, autonomic.
1. Basic level
The starting point for most IT environments. Each infrastructure element is managed independently by IT professionals who set it up, monitor it and eventually replace it.
2. Managed level
Systems management technologies can be used to collect information from disparate systems onto fewer consoles, reducing the time it takes for the administrator to collect and synthesize information as the IT environment becomes more complex.
3. Predictive level
New technologies are introduced to provide correlation among several infrastructure elements. These elements can begin to recognize patterns, predict the optimal configuration and provide advice on what course of action the administrator should take.
4. Adaptive level
As these technologies improve and as people become more comfortable with the advice and predictive power of these systems, we can progress to the adaptive level, where the systems themselves can automatically take the right actions based on the information that is available to them and the knowledge of what is happening in the system.
5. Autonomic level
The IT infrastructure operation is governed by business policies and objectives. Users interact with the autonomic technology to monitor the business processes, alter the objectives, or both.
3. Architectural details
Autonomic computing system
The architecture organizes an autonomic computing system into the layers and parts shown in Figure 1. These parts are connected using enterprise service bus patterns that allow the components to collaborate using standard mechanisms such as Web services. The enterprise service bus integrates the various building blocks, which include:
• Touchpoints for managed resources
• Knowledge sources
• Autonomic managers
• Manual managers
• Enterprise service bus
The lowest layer contains the system components, or managed resources, that make up the IT infrastructure. These managed resources can be any type of resource (hardware or software) and may have embedded self-managing attributes. The next layer incorporates consistent, standard manageability interfaces for accessing and controlling the managed resources. These standard interfaces are delivered through a touchpoint. Layers three and four automate some portion of the IT process using an autonomic manager.
A particular resource may have one or more touchpoint autonomic managers, each implementing a relevant control loop. Layer three in the figure illustrates this by depicting an autonomic manager for each of the four broad categories that were introduced earlier (self-configuring, self-healing, self-optimizing and self-protecting). Layer four contains autonomic managers that orchestrate other autonomic managers. It is these orchestrating autonomic managers that deliver the system-wide autonomic capability by incorporating control loops that have the broadest view of the overall IT infrastructure. The top layer illustrates a manual manager that provides a common system management interface for the IT professional using an integrated solutions console. The various manual and autonomic manager layers can obtain and share knowledge via knowledge sources.
Managed resource
A managed resource is a hardware or software component that can be managed. A managed resource could be a server, storage unit, database, application server, service, application or other entity. A managed resource might contain its own embedded self-management control loop, in addition to other autonomic managers that might be packaged with it. Intelligent control loops can be embedded in the run-time environment of a managed resource. These embedded control loops are one way to offer self-managing autonomic capability. The details of these embedded control loops may or may not be externally visible. The control loop might be deeply embedded in a resource so that it is not visible through the manageability interface. When any of the details for the control loop are visible, the control loop is configured through the manageability interface that is provided for that resource (for example, a disk drive).
Touchpoints
A touchpoint is an autonomic computing system building block that implements sensor and effector behavior for one or more of a managed resource's manageability mechanisms. It also provides a standard manageability interface. Deployed managed resources are accessed and controlled through these manageability interfaces. Manageability interfaces employ mechanisms such as log files, events, commands, application programming interfaces (APIs) and configuration files. These mechanisms provide various ways to gather details about and change the behavior of the managed resources. The mechanisms used to gather details are aggregated into a sensor for the managed resource, and the mechanisms used to change the behavior of the managed resources are aggregated into an effector for the resource.
A touchpoint is the component in a system that exposes the state and
management operations for a resource in the system. An autonomic manager communicates with a touchpoint through the manageability interface. A touchpoint, depicted in Figure 2, is the implementation of the manageability interface for a specific manageable resource or a set of related manageable resources. For example, there might be a touchpoint implemented that exposes the manageability for a database server, the databases that database server hosts, and the tables within those databases.
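As a rough illustration of how a touchpoint aggregates diverse mechanisms behind one sensor/effector pair, consider the hypothetical sketch below; the resource object, its config dictionary and its read_metrics, tail_log and reload methods are assumptions standing in for real log files, APIs and commands.

```python
class Touchpoint:
    """Standard manageability interface over one managed resource (sketch)."""

    def __init__(self, resource):
        self.resource = resource  # e.g. a database server object (assumed)

    # --- Sensor: aggregates the mechanisms used to gather details ---
    def get_state(self) -> dict:
        return {
            "config": dict(self.resource.config),         # configuration file
            "metrics": self.resource.read_metrics(),      # API call
            "recent_events": self.resource.tail_log(10),  # log file
        }

    # --- Effector: aggregates the mechanisms used to change behavior ---
    def apply_change(self, setting: str, value) -> None:
        self.resource.config[setting] = value             # configuration file
        self.resource.reload()                            # command
```

Whatever the underlying mechanisms look like, the autonomic manager above the touchpoint only ever sees get_state and apply_change.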
Touchpoint autonomic managers
Autonomic managers implement intelligent control loops that automate combinations of the tasks found in IT processes. Touchpoint autonomic managers are those that work directly with the managed resources through their touchpoints. These autonomic managers can perform various self-management tasks, so they embody different intelligent control loops. Some examples of such control loops, using the four self-managing categories include:
• Performing a self-configuring task such as installing software when it detects
that some prerequisite software is missing
• Performing a self-healing task such as correcting a configured path so installed
software can be correctly located
• Performing a self-optimizing task such as adjusting the current workload when
it observes an increase or decrease in capacity
• Performing a self-protecting task such as taking resources offline if it detects
an intrusion attempt
Most autonomic managers use policies (goals or objectives) to govern the behavior of intelligent control loops. Touchpoint autonomic managers use these policies to determine what actions should be taken for the managed resources that they manage. A touchpoint autonomic manager can manage one or more managed resources directly, using the managed resource's touchpoint or touchpoints. Figure 3 illustrates four typical arrangements. The primary differences among these arrangements are the type and number of managed resources that are within the autonomic manager's scope of control. The four typical arrangements are:
• A single resource scope is the most fundamental because an autonomic manager implements a control loop that accesses and controls a single managed resource, such as a network router, a server, a storage device, an application, a middleware platform or a personal computer.
• A homogeneous group scope aggregates resources of the same type. An example of a homogeneous group is a pool of servers that an autonomic manager can dynamically optimize to meet certain performance and availability thresholds.
• A heterogeneous group scope organizes resources of different types. An example of a heterogeneous group is a combination of heterogeneous devices and servers, such as databases, Web servers and storage subsystems that work together to achieve common performance and availability targets.
• A business system scope organizes a collection of heterogeneous resources so
an autonomic manager can apply its intelligent control loop to the service that is delivered to the business. Some examples are a customer care system or an electronic auction system. The business system scope requires autonomic managers that can comprehend the optimal state of business processes— based on policies, schedules and service levels—and drive the consequences of process optimization back down to the resource groups (both homogeneous and heterogeneous) and even to individual resources.
Fig 3: Four common managed resource arrangements.
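Of these arrangements, the homogeneous group scope is perhaps the easiest to picture. Below is a minimal hypothetical sketch of a touchpoint autonomic manager that enforces a utilization policy across a pool of servers; the Touchpoint interface and the policy keys are illustrative assumptions, not a standard API.

```python
class ServerPoolManager:
    """Touchpoint autonomic manager with a homogeneous group scope (sketch)."""

    def __init__(self, touchpoints, policy):
        self.touchpoints = touchpoints   # one Touchpoint per server in the pool
        self.policy = policy             # e.g. {"max_utilization": 0.8,
                                         #       "min_utilization": 0.3}

    def control_step(self):
        for tp in self.touchpoints:
            state = tp.get_state()
            util = state["metrics"]["cpu_utilization"]
            # The policy, not the code, decides what counts as "too busy".
            if util > self.policy["max_utilization"]:
                tp.apply_change("request_weight", 0.5)   # route less work here
            elif util < self.policy["min_utilization"]:
                tp.apply_change("request_weight", 1.0)   # restore normal share
```

The point of the sketch is that the thresholds live in the policy; swapping policies changes behavior without changing the control loop itself.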
These resource scopes define a set of decision-making contexts that are used to classify the purpose and role of a control loop within the autonomic computing architecture. The touchpoint autonomic managers shown previously in Figure 1 are each dedicated to a particular resource or a particular collection of resources. Touchpoint autonomic managers also expose a sensor and an effector, just like the managed resources in Figure 3 do. As a result, orchestrating autonomic managers can interact with touchpoint autonomic managers by using the same style of standard interface that touchpoint autonomic managers use to interact with managed resources.
Orchestrating autonomic managers
A single touchpoint autonomic manager acting in isolation can achieve autonomic behavior only for the resources that it manages. The self-managing autonomic capabilities delivered by touchpoint autonomic managers need to be coordinated to deliver system-wide autonomic computing behavior. Orchestrating autonomic managers provide this coordination function. There are two common configurations:
• Orchestrating within a discipline: an orchestrating autonomic manager coordinates multiple touchpoint autonomic managers of the same type (one of self-configuring, self-healing, self-optimizing or self-protecting).
• Orchestrating across disciplines: an orchestrating autonomic manager coordinates touchpoint autonomic managers that are a mixture of self-configuring, self-healing, self-optimizing and self-protecting.
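Because touchpoint autonomic managers expose the same style of sensor and effector interface as managed resources, an orchestrating autonomic manager can drive them uniformly. A minimal hypothetical sketch of such coordination follows; the get_state and apply_change methods are assumptions carried over from the earlier sketches, as are the goal keys.

```python
class OrchestratingManager:
    """Coordinates several touchpoint autonomic managers (sketch)."""

    def __init__(self, touchpoint_managers, goal):
        self.managers = touchpoint_managers
        self.goal = goal                     # e.g. {"target_response_ms": 200}

    def control_step(self):
        # Sense through each lower-level manager's sensor interface.
        readings = [m.get_state() for m in self.managers]
        slowest = max(readings, key=lambda r: r["metrics"]["response_ms"])
        if slowest["metrics"]["response_ms"] > self.goal["target_response_ms"]:
            # Effect a pool-wide rebalance rather than tuning one box,
            # since one over-tuned server cannot fix end-to-end latency.
            for m in self.managers:
                m.apply_change("rebalance", True)
```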
An example of an orchestrating autonomic manager is a workload manager. An autonomic management system for workload might include self-optimizing touchpoint autonomic managers for particular resources, as well as orchestrating autonomic managers that manage pools of resources. A touchpoint autonomic manager can optimize the utilization of a particular resource based on application priorities. Orchestrating autonomic managers can optimize resource utilization across a pool of resources, based on transaction measurements and policies. The philosophy behind workload management is one of policy-based, goal-oriented management. Tuning servers individually using only touchpoint autonomic managers cannot ensure the overall performance of applications that span a mix of platforms. Systems that appear to be functioning well on their own may not, in fact, be contributing to optimal system-wide end-to-end processing.
Manual managers
A manual manager provides a common system management interface for the IT professional using an integrated solutions console. Self-managing autonomic systems can use common console technology to create a consistent human-facing interface for the autonomic managers of IT infrastructure components. Autonomic capabilities in computer systems perform tasks that IT professionals choose to delegate to the technology, according to policies. In some cases, an administrator might choose for certain tasks to involve human intervention, and the human interaction with the system can be enhanced using a common console framework, based on industry standards, that promotes consistent presentation to IT professionals.

The primary goal of a common console is to provide a single platform that can host all the administrative console functions in server, software and storage products, allowing users to manage solutions rather than individual components or products. Administrative console functions range from setup and configuration to solution run-time monitoring and control. The customer value of an integrated solutions console includes reduced cost of ownership, attributable to more efficient administration, and shorter learning curves as new products and solutions are added to the autonomic system environment. The shorter learning curve is achieved by using standards and a Web-based presentation style. By delivering a consistent presentation format and behavior for administrative functions across diverse products, the common console creates a familiar user interface, reducing the need for staff to learn a different interface each time a new product is introduced. The common console architecture is based on standards (such as standard Java APIs), so that it can be extended to offer new management functions or to enable the development of new components for products in an autonomic system.

A common console instance consists of a framework and a set of console-specific components provided by products. Administrative activities are executed as portlets. Consistency of presentation and behavior is essential to improving administrative efficiency, and requires ongoing effort and cooperation among many product communities. A manual manager is an implementation of the user interface that enables an IT professional to perform some management function manually. The manual manager can collaborate with other autonomic managers at the same level or orchestrate autonomic managers and other IT professionals working at "lower" levels.
Autonomic manager
An autonomic manager is an implementation that automates some management function and externalizes this function according to the behavior defined by management interfaces. The autonomic manager is a component that implements the control loop. For a system component to be self-managing, it must have an automated method to collect the details it needs from the system; to analyze those details to determine if something needs to change; to create a plan, or sequence of actions, that specifies the necessary changes; and to perform those actions. When these functions can be automated, an intelligent control loop is formed. As shown in Figure 4, the architecture dissects the loop into four parts that share knowledge:
• The monitor function provides the mechanisms that collect, aggregate, filter
and report details (such as metrics and topologies) collected from a managed resource.
• The analyze function provides the mechanisms that correlate and model complex situations (for example, time-series forecasting and queuing models). These mechanisms allow the autonomic manager to learn about the IT environment and help predict future situations.
• The plan function provides the mechanisms that construct the actions needed to
achieve goals and objectives. The planning mechanism uses policy information to guide its work.
• The execute function provides the mechanisms that control the execution of a plan with considerations for dynamic updates.

These four parts work together to provide the control loop functionality. Figure 4 shows a structural arrangement of the parts rather than a control flow. The four parts communicate and collaborate with one another and exchange appropriate knowledge and data, as shown in Figure 4. As illustrated in Figure 4, autonomic managers, in a manner similar to touchpoints, provide sensor and effector manageability interfaces for other autonomic managers and manual managers to use. Using standard sensor and effector interfaces enables these components to be composed together in a manner that is transparent to the managed resources. For example, an orchestrating autonomic manager can use the manageability interfaces of touchpoint autonomic managers to accomplish its management functions (that is, the orchestrating autonomic manager can manage touchpoint autonomic managers), as illustrated previously in Figure 2.
Even though an autonomic manager is capable of automating the monitor, analyze, plan and execute parts of the loop, partial autonomic managers that perform only a subset of the monitor, analyze, plan and execute functions can be developed, and IT professionals can configure an autonomic manager to perform only some of the automated functions it is capable of performing.
Fig 4: Functional details of the autonomic manager
In Figure 4, four profiles (monitoring, analyzing, planning and executing) are shown. An administrator might configure this autonomic manager to perform only the monitoring function. As a result, the autonomic manager would surface notifications to a common console for the situations that it recognizes, rather than automating the analysis, planning and execution functions associated with those actions. Other configurations could allow additional parts of the control loop to be automated. Autonomic managers that perform only certain parts of the control loop can be composed together to form a complete closed loop. For example, one autonomic manager that performs only the monitor and analyze functions might collaborate with another autonomic manager that performs only the plan and execute functions to realize a complete autonomic control loop.
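A minimal sketch of one pass through such a loop may help before the four functions are described in detail. This is hypothetical Python, assuming the Touchpoint interface from the earlier sketches and a shared knowledge dictionary of the form {"policy": {...}, "history": []}; each of the four functions is collapsed to a single illustrative step.

```python
def mape_k_step(touchpoint, knowledge):
    """One pass of an intelligent control loop (illustrative sketch)."""
    # Monitor: collect details through the sensor and filter them
    # into a symptom worth analyzing.
    state = touchpoint.get_state()
    util = state["metrics"]["cpu_utilization"]
    symptom = "overload" if util > knowledge["policy"]["max_utilization"] else None

    # Analyze: determine whether the policy is still being met;
    # if it is, no change request is generated.
    if symptom is None:
        return

    # Plan: construct the actions needed to remove the symptom.
    change_plan = [("request_weight", 0.5)]   # route less work here

    # Execute: carry the plan out through the effector and record
    # what was done in the shared knowledge.
    for setting, value in change_plan:
        touchpoint.apply_change(setting, value)
    knowledge["history"].append((symptom, change_plan))
```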
Autonomic manager internal structure
* Monitor
The monitor function collects the details from the managed resources, via touchpoints, and correlates them into symptoms that can be analyzed. The details can include topology information, metrics, configuration property settings and so on. This data includes information about managed resource configuration, status, offered capacity and throughput. Some of the data is static or changes slowly, whereas other data is dynamic, changing continuously through time. The monitor function aggregates, correlates and filters these details until it determines a symptom that needs to be analyzed. For example, the monitor function could aggregate and correlate the content of events received from multiple resources to determine a symptom that relates to that particular combination of events. Logically, this symptom is passed to the analyze function. Autonomic managers must collect and process large amounts of data from the touchpoint sensor interface of a managed resource. An autonomic manager's ability to rapidly organize and make sense of this data is crucial to its successful operation.
* Analyze
The analyze function provides the mechanisms to observe and analyze situations to determine if some change needs to be made. For example, the requirement to enact a change may occur when the analyze function determines that some policy is not being met. The analyze function is responsible for determining if the autonomic manager can abide by the established policy, now and in the future. In many cases, the analyze function models complex behavior so it can employ prediction techniques such as time-series forecasting and queuing models. These mechanisms allow the autonomic manager to learn about the IT environment and help predict future behavior. Autonomic managers must be able to perform complex data analysis and reasoning on the symptoms provided by the monitor function. The analysis is influenced by stored knowledge data. If changes are required, the analyze function generates a change request and logically passes that change request to the plan function. The change request describes the modifications that the analyze component deems necessary or desirable.
* Plan
The plan function creates or selects a procedure to enact a desired alteration in the managed resource. The plan function can take on many forms, ranging from a single command to a complex workflow. The plan function generates the appropriate change plan, which represents a desired set of changes for the managed resource, and logically passes that change plan to the execute function.
* Execute
The execute function provides the mechanism to schedule and perform the necessary changes to the system. Once an autonomic manager has generated a change plan that corresponds to a change request, some actions may need to be taken to modify the state of one or more managed resources. The execute function of an autonomic manager is responsible for carrying out the procedure that was generated by the plan function of the autonomic manager through a series of actions. These actions are performed using the touchpoint effector interface of a managed resource. Part of the execution of the change plan could involve updating the knowledge that is used by the autonomic manager.
Knowledge source
A knowledge source is an implementation of a registry, dictionary, database or other repository that provides access to knowledge according to the interfaces prescribed by the architecture. In an autonomic system, knowledge consists of particular types of data with architected syntax and semantics, such as symptoms, policies, change requests and change plans. This knowledge can be stored in a knowledge source so that it can be shared among autonomic managers. The knowledge stored in knowledge sources can be used to extend the knowledge capabilities of an autonomic manager. An autonomic manager can load knowledge from one or more knowledge sources, and the autonomic manager's manager can activate that knowledge, allowing the autonomic manager to perform additional management tasks (such as recognizing particular symptoms or applying certain policies).
Data used by the autonomic manager's four functions (monitor, analyze, plan and execute) is stored as shared knowledge. The shared knowledge includes data such as topology information, historical logs, metrics, symptoms and policies.
The knowledge used by an autonomic manager is obtained in one of three ways:
(1) The knowledge is passed to the autonomic manager. An autonomic manager might obtain policy knowledge in this manner. A policy consists of a set of behavioral constraints or preferences that influence the decisions made by an autonomic manager.
(2) The knowledge is retrieved from an external knowledge source. An autonomic manager might obtain symptom definitions or resource-specific historical knowledge in this manner. A knowledge source could store symptoms that could be used by an autonomic manager; a log file may contain a detailed history in the form of entries that signify events that have occurred in a component or system.
(3) The autonomic manager itself creates the knowledge. The knowledge used by a particular autonomic manager could be created by the monitor part, based on the information collected through sensors. The monitor part might create knowledge based on recent activities by logging the notifications that it receives from a managed resource. The execute part of an autonomic manager might update the knowledge to indicate the actions that were taken as a result of the analysis and planning (based on the monitored data); the execute part would then indicate how those actions affected the managed resource (based on subsequent monitored data obtained from the managed resource after the actions were carried out). This knowledge is contained within the autonomic manager, as represented by the "knowledge" block in Figure 4. If the knowledge is to be shared with other autonomic managers, it must be placed into a knowledge source.
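A toy sketch of the second path, retrieval from an external knowledge source, might look like the following; the KnowledgeSource class and its publish and retrieve methods are illustrative assumptions rather than a prescribed interface.

```python
class KnowledgeSource:
    """Shared repository of architected knowledge (sketch)."""

    def __init__(self):
        self._store = {"symptoms": {}, "policies": {}}

    def publish(self, kind: str, name: str, item) -> None:
        self._store[kind][name] = item        # e.g. a symptom definition

    def retrieve(self, kind: str, name: str):
        return self._store[kind].get(name)

# One manager publishes a symptom definition; another loads and uses it.
ks = KnowledgeSource()
ks.publish("symptoms", "overload",
           {"metric": "cpu_utilization", "threshold": 0.8})
overload = ks.retrieve("symptoms", "overload")
```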
Knowledge types include solution topology knowledge, policy knowledge, and problem determination knowledge. The following summarizes the types of knowledge that may be present in a self-managing autonomic system. Each knowledge type must be expressed using common syntax and semantics so the knowledge can be shared.
* Solution topology knowledge: captures knowledge about the components and their construction and configuration for a solution or business system. Installation and configuration knowledge is captured in a common installable-unit format to eliminate complexity. The plan function of an autonomic manager can use this knowledge for installation and configuration planning.
* Policy knowledge: a policy is knowledge that is consulted to determine whether or not changes need to be made in the system. An autonomic computing system requires a uniform method for defining the policies that govern the decision-making for autonomic managers. By defining policies in a standard way, they can be shared across autonomic managers to enable entire systems to be managed by a common set of policies.
* Problem determination knowledge: includes monitored data, symptoms and decision trees. The problem determination process also may create knowledge: as the system responds to actions taken to correct problems, learned knowledge can be collected within the autonomic manager. An autonomic computing system requires a uniform method for representing problem determination knowledge, such as monitored data (common base events), symptoms and decision trees.
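For instance, a policy defined in a standard, declarative way can be expressed as plain data. The schema below is purely hypothetical, but it shows how a condition, an action and a priority can be captured so that any autonomic manager that understands the schema can apply the same policy.

```python
# A hypothetical policy expressed as plain data. The field names are
# illustrative assumptions, not a standardized policy language.
response_time_policy = {
    "name": "gold-tier-response",
    "scope": "business_system",          # where the policy applies
    "condition": {"metric": "response_ms", "operator": ">", "value": 200},
    "action": {"type": "scale_out", "max_instances": 10},
    "priority": 1,
}
```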
Manageability Interface
The manageability interface for controlling a manageable resource is organized into its sensor and effector interfaces. A touchpoint implements the sensor and effector behavior for specific manageable resource types by mapping the standard sensor and effector interfaces to one or more of the manageable resource's manageability interface mechanisms. The manageability interface reduces complexity by offering a standard interface to autonomic managers, rather than the diverse manageability interface mechanisms associated with various types of manageable resources.

A sensor consists of one or both of the following:
• A set of properties that expose information about the current state of a manageable resource and are accessed through standard "get" operations.
• A set of management events (unsolicited, asynchronous messages or notifications) that occur when the manageable resource undergoes state changes that merit reporting.
These two parts of a sensor interface are referred to as interaction styles. The "get" operations use the request-response interaction style; events use the send-notification interaction style.
An effector consists of one or both of the following:
• A collection of "set" operations that allow the state of the manageable resource
to be changed in some way
• A collection of operations that are implemented by autonomic managers that allow the manageable resource to make requests of its manager.

The "set" operations use the perform-operation interaction style; requests use the solicit-response interaction style to allow the manageable resource to consult with its manager.
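The four interaction styles can be sketched as follows; this is illustrative Python, with the resource's properties dictionary and the manager's advise method assumed purely for the example.

```python
class Sensor:
    """Sensor: request-response "get" plus send-notification events (sketch)."""

    def __init__(self, resource):
        self.resource = resource
        self._subscribers = []

    def get_property(self, name):             # request-response style
        return self.resource.properties[name]

    def subscribe(self, callback):            # managers register for events
        self._subscribers.append(callback)

    def notify(self, event):                  # send-notification style
        for callback in self._subscribers:
            callback(event)


class Effector:
    """Effector: perform-operation "set" plus solicit-response call-outs (sketch)."""

    def __init__(self, resource, manager):
        self.resource = resource
        self.manager = manager

    def set_property(self, name, value):      # perform-operation style
        self.resource.properties[name] = value

    def request_guidance(self, question):     # solicit-response style
        return self.manager.advise(question)
```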
The sensor and effector in the architecture are linked together. For example, a configuration change that occurs through the effector should be reflected as a configuration change notification through the sensor interface. The linkage between the sensor and effector is more formally defined using the concept of manageability capabilities: logical collections of manageable resource state information and operations. Some examples are:
• Identification: state information and operations used to identify an instance of a
manageable resource
• Metrics: state information and operations for measurements of a manageable
resource, such as throughput, utilization and so on
• Configuration: state information and operations for the configurable attributes of a manageable resource

For each manageability capability, the client of the manageability interface must be able to obtain and control state data through the manageability interface, including:
• Meta details (for example, to identify properties that are used for configuration
of a manageable resource, or information that specifies which resources can be hosted by the manageable resource)
• Sensor interactions, including mechanisms for retrieving the current property
values (such as metrics, configuration) and available notifications (what types of events and situations the manageable resource can generate)
• Effector interactions, including operations to change the state (which effector
operations and interaction styles the manageable resource supports) and call-outs to request changes to existing state (what types of call-outs the manageable resource can perform)

Enterprise service bus
An enterprise service bus is an implementation that assists in integrating other building blocks by directing the interactions among these building blocks. The enterprise service bus can be used to "connect" various autonomic computing building blocks. The role that a particular logical instance of the enterprise service bus performs is established by autonomic computing usage patterns such as:
• An enterprise service bus that aggregates multiple manageability mechanisms
for a single manageable resource;
• An enterprise service bus that enables an autonomic manager to manage multiple touchpoints;
• An enterprise service bus that enables multiple autonomic managers to manage
a single touchpoint;
• An enterprise service bus that enables multiple autonomic managers to manage
multiple touchpoints.
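As a toy illustration of the third pattern, multiple autonomic managers managing a single touchpoint, a minimal publish/subscribe bus might look like this; the ServiceBus class and topic names are assumptions for the sketch, not part of the architecture's prescribed interfaces.

```python
class ServiceBus:
    """Minimal message bus connecting building blocks (sketch)."""

    def __init__(self):
        self._routes = {}                     # topic -> list of handlers

    def subscribe(self, topic, handler):
        self._routes.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        for handler in self._routes.get(topic, []):
            handler(message)

# Two autonomic managers observe the same touchpoint's events.
bus = ServiceBus()
bus.subscribe("db01.events", lambda e: print("healing manager saw", e))
bus.subscribe("db01.events", lambda e: print("optimizing manager saw", e))
bus.publish("db01.events", {"type": "slow_query", "ms": 950})
```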
4. Benefits
Autonomic computing was conceived to lessen the spiraling demands for skilled IT resources, reduce complexity and drive computing into a new era that may better exploit its potential to support higher-order thinking and decision making. Immediate benefits will include reduced dependence on human intervention to maintain complex systems, accompanied by a substantial decrease in costs. Long-term benefits will allow individuals, organizations and businesses to collaborate on complex problem solving.
Short-term IT-related benefits
• Simplified user experience through a more responsive, real-time system.
• Cost savings that scale with use.
• Scaled power, storage and costs that optimize usage across both hardware and software.
• Full use of idle processing power, including home PCs, through networked systems.
• Natural language queries allow deeper and more accurate returns.
• Seamless access to multiple file types. Open standards will allow users to pull data from all potential sources by re-formatting on the fly.
• Stability, high availability and high security, with fewer system or network errors due to self-healing.
• Improved computational capacity
Long-term / higher-order benefits
• Realize the vision of enablement by shifting available resources to higher-order business.
• Embedding autonomic capabilities in client or access devices, servers, storage systems, middleware, and the network itself; constructing autonomic federated systems.
• Achieving end-to-end service level management.
• Accelerated implementation of new capabilities
• Collaboration and global problem-solving. Distributed computing allows for more immediate sharing of information and processing power to use complex mathematics to solve problems.
• Massive simulation - weather, medical - complex calculations like protein folding, which require processors to run 24/7 for as long as a year at a time.
5. Challenges
To create autonomic systems, researchers must address key challenges of varying levels of complexity:
• System identity: Before a system can transact with other systems it must know the extent of its own boundaries. How will we design our systems to define and redefine themselves in dynamic environments?
• Interface design: With a multitude of platforms running, system administrators face a daunting variety of tools and consoles. How will we build consistent interfaces and points of control while allowing for a heterogeneous environment?
• Translating business policy into I/T policy: The end result needs to be transparent to the user. How will we create human interfaces that remove complexity and allow users to interact naturally with I/T systems?
• Systemic approach: Creating autonomic components is not enough. How can we unite a constellation of autonomic components into a federated system?
• Standards: The age of proprietary solutions is over. How can we design and support open standards that will work?
• Adaptive algorithms: New methods will be needed to equip our systems to deal with changing environments and transactions. How will we create adaptive algorithms to take previous system experience and use that information to improve the rules?
• Improving network-monitoring functions to protect security, detect potential threats and achieve a level of decision-making that allows for the redirection of key activities or data.
• Smarter microprocessors that can detect errors and anticipate failures.
6. Conclusion
The autonomic concept has been adopted by today's leading vendors and incorporated into their products. Aware that success is tied to interoperability, many are participating in the standards development necessary to provide the foundation for self-managing technological ecosystems, and are integrating standards into their technology.
IBM is making a substantial investment in the autonomic concept and has released its first wave of standards-based components, tools and knowledge capital. IBM offers a wide array of service offerings, backed by methodology and tools, which enable and support the adoption of Autonomic Computing.
Autonomic capabilities are critical to businesses with large and complex IT environments, those using Web Services and/or Service Oriented Architecture (SOA) models, and those that leverage e-business or e-commerce. They are also key enablers for smaller businesses seeking to take advantage of current technologies, because they help mask complexity by simplifying infrastructure management.
7. Future scope
Some current components and their proposed development under autonomic computing include SMS, SNMP, adaptive network routing, network congestion control, high-availability clustering, ESS, RAID, DB optimizers and virus management.
SMS's level of sophistication is serving the world (that is, people and business processes). It is used for policy management and Storage Tank, a policy-managed storage system in which, for every file or folder, the user sets policies for availability, security and performance, and the system figures out where to put the data and what levels of redundancy and backup to use; this is goal-oriented management. Its future goal is policy languages and protocols.
SNMP's level of sophistication is heterogeneous components interacting. It is used for Mounties (which enables goal-oriented recovery from system failure instead of procedure-oriented recovery) and workload management. Its future goals are an autonomic computing stack, social policy, and DB/storage co-optimization.
Adaptive network routing, network congestion control and high-availability clustering have a level of sophistication of homogeneous components interacting. They are used for collective intelligence and Storage Bricks; the idea there is to have higher redundancy than RAID, protection of performance hot spots with proactive copies, and elimination of repair for the life of the system by building extra drives into the system. Their future goals are new packaging concepts for storage and subscription computing. One such concept changes the packaging of an array of disks from a 2-D grid to a 3-D cube; a prototype called IceCube is roughly the size of a medium-sized packing box, with a capacity of up to one petabyte (10^15 bytes), 250 kW power draw and 75 dB air noise, and should last for five years without any service.
Other components include ESS, RAID, DB optimizers and virus management, used for eLiza, SMART/LEO (Learning in Query Optimization) and software rejuvenation. Their future goal is more of the same, done better; one example is a DB optimizer that learns from past performance, which will be in the next version of DB2.