MIDDLEWARE AND ENTERPRISE INTEGRATION TECHNOLOGIES

Middleware is the software that connects software components or enterprise applications. It is the software layer that lies between the operating system and the applications on each side of a distributed computer network, and it typically supports complex, distributed business applications. Middleware is the infrastructure that facilitates the creation of business applications and provides core services such as concurrency, transactions, threading, messaging, and the SCA framework for service-oriented architecture (SOA) applications. It also provides security and high-availability functionality to the enterprise.
Middleware includes Web servers, application servers, content management systems, and similar tools that support application development and delivery. It is especially integral to information technology based on Extensible Markup Language (XML), Simple Object Access Protocol (SOAP), Web services, SOA, Web 2.0 infrastructure, and Lightweight Directory Access Protocol (LDAP).
Managing these applications and the underlying middleware technology can be difficult, and IT organizations often have to rely on a variety of specialized tools. This can lead to inefficiency and may introduce complexities and risks. Enterprise Manager Grid Control is a definitive tool for middleware management and allows you to manage both Oracle applications and custom Java EE applications that run on a combination of Oracle and non-Oracle middleware software.
General Middleware
It includes communication stacks, distributed directories, authentication services, network time services, RPC, and queuing services, along with network OS extensions such as distributed file and print services.

Service Specific Middleware
It is needed to accomplish a particular client/server type of service. It includes:
Database-specific middleware
OLTP-specific middleware
Groupware-specific middleware
Object-specific middleware
Internet-specific middleware
System-management-specific middleware
The building blocks of client/server applications are:
1. Client
2. Middleware
3. Server
The Client/Server Building Blocks
The Client Building Block
* Runs the client side of the application
* It runs on the OS that provides a GUI or an OOUI and that can access distributed services, wherever they may be.
* The client also runs a component of the Distributed System Management (DSM) element.
The Server Building Block
* Runs the server side of the application
* The server application typically runs on top of some shrink-wrapped server software package.
* The five contending server platforms for creating the next generation of client/server applications are SQL database servers, TP monitors, groupware servers, object servers, and Web servers.
* The server side depends on the OS to interface with the middleware building block.
* This package may range from a simple agent to a shared object database.

The Middleware Building Block
* Runs on both the client and server sides of an application
* Middleware is the nervous system of the client/server infrastructure
* This also has the DSM component
DSM
* Runs on every node in the client/server network.
* A managing workstation collects information from all its agents on the network and displays it graphically.
* The managing workstation can also instruct its agents to perform actions on its behalf.
RPC
In computer science, a remote procedure call (RPC) is an inter-process communication that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details for this remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program, or remote. When the software in question uses object-oriented principles, RPC is called remote invocation or remote method invocation.
An RPC is initiated by the client, which sends a request message to a known remote server to execute a specified procedure with supplied parameters. The remote server sends a response to the client, and the application continues its process. While the server is processing the call, the client is blocked (it waits until the server has finished processing before resuming execution), unless the client sends an asynchronous request to the server, such as an XHTTP call. There are many variations and subtleties in various implementations, resulting in a variety of different (incompatible) RPC protocols.
An important difference between remote procedure calls and local calls is that remote calls can fail because of unpredictable network problems. Also, callers generally must deal with such failures without knowing whether the remote procedure was actually invoked. Idempotent procedures (those that have no additional effects if called more than once) are easily handled, but enough difficulties remain that code to call remote procedures is often confined to carefully written low-level subsystems.
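The idempotency point can be made concrete with a small sketch. This is plain Java, not tied to any real RPC library; the method names and the bank-balance scenario are hypothetical:

```java
// Hypothetical illustration of why idempotent remote procedures tolerate
// the retries that network failures force on RPC clients.
public class IdempotencySketch {
    static int balance = 0;

    // Idempotent: calling it twice with the same argument has the same effect
    // as calling it once, so a client may safely retry after a timeout.
    static void setBalance(int value) { balance = value; }

    // Not idempotent: a retry after a lost reply applies the change twice.
    static void addToBalance(int delta) { balance += delta; }

    public static void main(String[] args) {
        setBalance(100);
        setBalance(100);             // a retry is harmless
        System.out.println(balance); // 100

        balance = 0;
        addToBalance(100);
        addToBalance(100);           // a retry doubles the effect
        System.out.println(balance); // 200
    }
}
```

A caller that cannot tell whether the first `addToBalance` actually ran on the server has no safe way to retry it, which is exactly the difficulty the paragraph above describes.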
Java RMI
The Java Remote Method Invocation (Java RMI) is a Java API that performs the object-oriented equivalent of remote procedure calls (RPC), with support for direct transfer of serialized Java classes and distributed garbage collection.
1. The original implementation depends on Java Virtual Machine (JVM) class representation mechanisms and it thus only supports making calls from one JVM to another. The protocol underlying this Java-only implementation is known as Java Remote Method Protocol (JRMP).
2. In order to support code running in a non-JVM context, a CORBA version was later developed.
Usage of the term RMI may denote solely the programming interface or may signify both the API and JRMP, whereas the term RMI-IIOP (read: RMI over IIOP) denotes the RMI interface delegating most of the functionality to the supporting CORBA implementation.
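A minimal, self-contained sketch of Java RMI follows. For brevity, server and client run in one JVM; the `Greeter` interface, the binding name, and the port number are illustrative assumptions:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface: every method must declare RemoteException,
// because any call may fail for network reasons.
interface Greeter extends Remote {
    String greet(String name) throws RemoteException;
}

// The implementation; extending UnicastRemoteObject exports the object
// to the RMI runtime when it is constructed.
class GreeterImpl extends UnicastRemoteObject implements Greeter {
    GreeterImpl() throws RemoteException { super(); }
    public String greet(String name) { return "Hello, " + name; }
}

public class RmiSketch {
    public static void main(String[] args) throws Exception {
        // Server side: start an in-process registry and bind the object.
        Registry registry = LocateRegistry.createRegistry(2099);
        GreeterImpl impl = new GreeterImpl();
        registry.rebind("greeter", impl);

        // Client side: look up the stub and call it as if it were local.
        Greeter stub = (Greeter) LocateRegistry.getRegistry("localhost", 2099)
                                               .lookup("greeter");
        System.out.println(stub.greet("RMI"));

        // Unexport so the JVM can exit cleanly.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
    }
}
```

Note that the client code calls `greet` exactly as it would on a local object; the stub, the JRMP wire protocol, and the marshalling are all hidden behind the `Greeter` interface.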
Distributed Objects
The term distributed objects usually refers to software modules that are designed to work together, but reside either in multiple computers connected via a network or in different processes inside the same computer. One object sends a message to another object in a remote machine or process to perform some task. The results are sent back to the calling object.
The term may also generally refer to one of the extensions of the basic object concept used in the context of distributed computing, such as replicated objects or live distributed objects.

1. Replicated objects are groups of software components (replicas) that run a distributed multi-party protocol to achieve a high degree of consistency between their internal states, and that respond to requests in a coordinated manner. Referring to the group of replicas jointly as an object reflects the fact that interacting with any of them exposes the same externally visible state and behavior.
2. Live distributed objects (or simply live objects) generalize the replicated object concept to groups of replicas that might internally use any distributed protocol, perhaps resulting in only a weak consistency between their local states. Live distributed objects can also be defined as running instances of distributed multi-party protocols, viewed from the object-oriented perspective as entities that have distinct identity, and that can encapsulate distributed state and behavior.
OMG
The Object Management Group (OMG) is an international non-profit organization supported by information systems vendors, software developers, and users. OMG was founded in 1989, now has over 600 member organizations, and meets bi-monthly. OMG provides a widely supported framework for open, distributed, interoperable, scalable, reusable, portable software components based on OMG-standard object-oriented interfaces. Objectives of the OMG architecture are:
1. to benefit application developers by making it much easier for developers to build large-scale, industrial strength applications from a widely available toolkit of standard components, and
2. to benefit end users by providing a common semantics that goes far beyond the notion of a common look-and-feel in today's desktop applications.

Most work occurs in between meetings, but decisions are made in key OMG committees: the OMG Architecture Board, the Platform and Domain Technical Committees, and various Task Forces, Special Interest Groups, and Subcommittees, all of which report to the Architecture Board. Task Forces typically issue Requests for Information (RFIs) to help define a part of OMG's overall Object Management Architecture, then define an architecture and roadmap, and finally issue a series of Requests for Proposals (RFPs) to industry for interfaces to object services (software components) that populate the architecture. RFP respondents provide candidate interface specifications and promise conformant, commercially available implementations. From RFP issue to technology adoption is typically one year; commercial implementations are typically available a year after that.
There are currently two Platform Technical Committee (PTC) task forces: the newly combined Object Request Broker/Object Services TF (ORBOS) and the Common Facilities TF. The Internet SIG reports to the PTC. There are currently several (relatively new) Domain Technical Committee (DTC) task forces and SIGs: Financial TF, Manufacturing TF, Health Care TF, Business Objects TF, Analysis and Design TF, GIS SIG, Multimedia and Electronic Commerce SIG, and End User SIG (mirroring OMG's potential for providing industry-specific object standards). There are currently two standing subcommittees: Policies and Procedures, and Liaison, the latter concerned with ensuring that OMG builds and maintains close liaisons with other industry groups and standards development organizations.
Overview of CORBA
The Common Object Request Broker Architecture (CORBA) [OMG:95a] is an open distributed object computing infrastructure standardized by the Object Management Group (OMG). CORBA automates many common network programming tasks, such as object registration, location, and activation; request demultiplexing; framing and error handling; parameter marshalling and demarshalling; and operation dispatching.
Overview of COM/DCOM
COM
Component Object Model (COM) is a binary-interface standard for software components introduced by Microsoft in 1993. It is used to enable interprocess communication and dynamic object creation in a large range of programming languages. COM is the basis for several other Microsoft technologies and frameworks, including OLE, OLE Automation, ActiveX, COM+, DCOM, the Windows shell, DirectX, and Windows Runtime.

The essence of COM is a language-neutral way of implementing objects that can be used in environments different from the one in which they were created, even across machine boundaries. For well-authored components, COM allows reuse of objects with no knowledge of their internal implementation, as it forces component implementers to provide well-defined interfaces that are separated from the implementation. The different allocation semantics of languages are accommodated by making objects responsible for their own creation and destruction through reference-counting. Casting between different interfaces of an object is achieved through the QueryInterface method. The preferred method of inheritance within COM is the creation of sub-objects to which method calls are delegated.
COM is an interface technology defined and implemented as a standard only on Microsoft Windows and in Apple's Core Foundation 1.3 and later plug-in API, which in any case implements only a subset of the whole COM interface. For some applications, COM has been replaced at least to some extent by the Microsoft .NET Framework and by support for Web services through the Windows Communication Foundation (WCF). However, COM objects can be used with all .NET languages through .NET COM Interop. Networked DCOM uses proprietary binary formats, while WCF encourages the use of XML-based SOAP messaging. COM is very similar to other component software interface technologies, such as CORBA and JavaBeans, although each has its own strengths and weaknesses.
Unlike C++, COM provides a stable ABI that does not change between compiler releases. This makes COM interfaces attractive for object-oriented C++ libraries that are to be used by clients compiled using different compiler versions.
DCOM
Microsoft Distributed COM (DCOM) extends the Component Object Model (COM) to support communication among objects on different computers: on a LAN, a WAN, or even the Internet. With DCOM, your application can be distributed at locations that make the most sense to your customer and to the application.
Because DCOM is a seamless evolution of COM, the world's leading component technology, you can take advantage of your existing investment in COM-based applications, components, tools, and knowledge to move into the world of standards-based distributed computing. As you do so, DCOM handles the low-level details of network protocols so you can focus on your real business: providing great solutions to your customers.

DCOM is an extension of the Component Object Model (COM). COM defines how components and their clients interact. This interaction is defined such that the client and the component can connect without the need of any intermediary system component. The client calls methods in the component without any overhead whatsoever.
In today's operating systems, processes are shielded from each other. A client that needs to communicate with a component in another process cannot call the component directly, but has to use some form of interprocess communication provided by the operating system. COM provides this communication in a completely transparent fashion: it intercepts calls from the client and forwards them to the component in another process. When client and component reside on different machines, DCOM simply replaces the local interprocess communication with a network protocol. Neither the client nor the component is aware that the wire that connects them has just become a little longer.
Overview of EJB
EJB stands for Enterprise JavaBeans. EJB is an essential part of the J2EE platform, which has a component-based architecture that provides multi-tiered, distributed, and highly transactional features to enterprise-level applications.
EJB provides an architecture for developing and deploying component-based enterprise applications with robustness, high scalability, and high performance in mind. An EJB application can be deployed on any application server compliant with the J2EE 1.3 specification.
Benefits
1. Simplified development of large-scale, enterprise-level applications.
2. The application server/EJB container provides most of the system-level services, such as transaction handling, logging, load balancing, persistence, and exception handling, so the developer has to focus only on the business logic of the application.
3. The EJB container manages the life cycle of EJB instances, so the developer need not worry about when to create or delete EJB objects.
EJB Architecture
EJBs are based conceptually on the Java Remote Method Invocation (RMI) model. For example, remote object access and parameter passing for EJBs follow the RMI specification. The EJB specification does not prescribe that the transport mechanism has to be pure RMI. The Oracle8i EJB server uses RMI over IIOP for its transport protocol, a practice that is becoming common among server vendors.
Basic Concepts
Before going into details about implementing EJBs, some basic concepts must be clarified. First of all, recall that a bean runs in a container. The container, which is part of the EJB server, provides a number of services to the bean. These include transaction services, synchronization services, and security.
To provide these services, the bean container must be able to intercept calls to bean methods. For example, a client application calls a bean method that has a transaction attribute that requires the bean to create a new transaction context. The bean container must be able to interpose code to start a new transaction before the method call, and to commit the transaction, if possible, when the method completes, and before any values are returned to the client.
For this reason and others, a client application does not call the remote bean methods directly. Instead, the client invokes the bean method through a two-step process, mediated by the ORB and by the container.
First, the client actually calls a local proxy stub for the remote method. The stub marshalls any parameter data and then calls a remote skeleton on the server. The skeleton unmarshalls the data and upcalls to the bean container. This step is required because of the remote nature of the call. Note that this step is completely transparent to both the client application developer and the bean developer. It is a detail that you do not need to know to write your application code, either on the client or the server. Nevertheless, it is useful to know what is going on, especially when it comes to understanding what happens during bean deployment.

In the second step, the bean container gets the skeleton upcall, then interposes whatever services are required by the context. These can include:
1. authenticating the client, on the first method call
2. performing transaction management
3. calling synchronization methods in the bean itself (see Session Synchronization)
4. identity checks and switch
The container then delegates the method call to the bean. The bean method executes. When it returns, the thread of control returns to the bean container, which interposes whatever services are required by the context. For example, if the method is running in a transaction context, the bean container performs a commit operation, if possible. This depends on the transaction attributes in the bean descriptor.
Then the bean container calls the skeleton, which marshalls return data, and returns it to the client stub. These steps are completely invisible to client-side and server-side application developers. One of the major advantages of the EJB development model is that it hides the complexity of transaction and identity management from developers.
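The interposition described above can be sketched with a Java dynamic proxy. This is an illustrative simplification of what an EJB container does, not a real container; the `Account` interface and the transaction log are hypothetical:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical sketch of container-style interposition: begin a transaction
// before the call, delegate to the bean, then commit (or roll back) after.
public class InterpositionSketch {
    public interface Account { void deposit(int amount); }

    public static class AccountBean implements Account {
        public int balance = 0;
        public void deposit(int amount) { balance += amount; }
    }

    // Wrap a bean so every call passes through "container" services first.
    public static Account wrap(Account bean, StringBuilder log) {
        InvocationHandler h = (proxy, method, args) -> {
            log.append("begin-tx;");                       // interposed before the call
            try {
                Object result = method.invoke(bean, args); // delegate to the bean
                log.append("commit;");                     // interposed after success
                return result;
            } catch (Exception e) {
                log.append("rollback;");                   // interposed on failure
                throw e;
            }
        };
        return (Account) Proxy.newProxyInstance(
                Account.class.getClassLoader(), new Class<?>[]{Account.class}, h);
    }

    public static void main(String[] args) {
        AccountBean bean = new AccountBean();
        StringBuilder log = new StringBuilder();
        Account proxy = wrap(bean, log);
        proxy.deposit(50);   // the client calls the proxy, never the bean directly
        System.out.println(log + " balance=" + bean.balance);
        // → begin-tx;commit; balance=50
    }
}
```

The client holds only the `Account` interface, which is why the container is free to interpose transactions, security checks, or identity switches without either side knowing.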
Building and Deploying EJBs
1. Each EJB must be provided with a deployment descriptor.
2. The deployment descriptor is a serialized instance of a Java class that contains information about how to deploy the bean.
3. Two flavors of deployment descriptors:
Session Descriptors – apply to Session EJBs
Entity Descriptors – apply to Entity EJBs
4. A Deployment Descriptor contains information such as the following:
The name of the EJB Class
The name of the EJB Home Interface
The name of the EJB Remote Interface
ACLs of entities authorized to use each class or method.
For Entity Beans, a list of container-managed fields.
For Session Beans, a value denoting whether the bean is stateful or stateless.
A properties file may be generated, containing environment properties that will be available at runtime.
A manifest file is needed to create the ejb-jar file, which is the medium used to distribute EJBs.
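For reference, in EJB 1.1 and later the deployment descriptor is an XML file (ejb-jar.xml) rather than a serialized Java object. A minimal descriptor for a hypothetical stateless session bean might look like this; the bean and interface names are illustrative assumptions:

```xml
<ejb-jar>
  <enterprise-beans>
    <session>
      <ejb-name>CourseCatalog</ejb-name>          <!-- hypothetical bean name -->
      <home>edu.example.CourseCatalogHome</home>  <!-- home interface -->
      <remote>edu.example.CourseCatalog</remote>  <!-- remote interface -->
      <ejb-class>edu.example.CourseCatalogBean</ejb-class>
      <session-type>Stateless</session-type>      <!-- Stateless or Stateful -->
      <transaction-type>Container</transaction-type>
    </session>
  </enterprise-beans>
</ejb-jar>
```

Note how the descriptor carries exactly the items listed above: the EJB class, the home and remote interfaces, and the stateful/stateless flag.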
Deploying the EJBs:
1. At deployment time, the EJB container must read the ejb-jar file and create implementations for the home and remote interfaces.
2. It reads the deployment descriptor, ensures that the bean has what it needs, and adds the bean's property settings to the environment so that they are available to the bean at runtime.

EJB Applications
The workbench provides a specialized environment that you can use to develop and test enterprise beans that conform to the distributed component architecture defined in the Sun Microsystems Enterprise JavaBeans™ (EJB) specification. This product supports the Enterprise JavaBeans 1.1, 2.0, and 2.1 specification levels.
This product also supports extended Enterprise JavaBeans functionality provided by WebSphere® Application Server, including extensions to the specification and security and other bindings. The complete Enterprise JavaBeans specifications and descriptions of the technology are available from the java.sun.com Web site.
The EJB development environment includes the following tools:
The J2EE perspective
Tools for importing existing EJB JAR files
Tools for creating enterprise beans and access beans
Tools for building data persistence into enterprise beans
Tools for generating deployment code
Tools for validating your enterprise beans for specification compliance
Session beans
One way to think about the application logic layer (middle tier) is as a set of objects that, together, implement the business logic of an application. Session beans are the construct in EJBs designed for this purpose. As shown in Figure 2, there may be multiple session beans in an application. Each handles a subset of the application's business logic.
A session bean tends to be responsible for a group of related functionality. For example, an application for an educational institution might have a session bean whose methods contain logic for handling student records. Another session bean might contain logic that maintains the lists of courses and programs available at that institution.
There are two types of session beans, which are defined by their use in a client interaction:

1. Stateless: These beans do not declare any instance (class-level) variables, so their methods can act only on local parameters. There is no way to maintain state across method calls.
2. Stateful: These beans can hold client state across method invocations. This is possible with the use of instance variables declared in the class definition. The client will then set the values for these variables and use these values in other method calls.
There may be more work involved for the server to share stateful session beans than is required to share stateless beans. Storing the state of an EJB is a very resource-intensive process, so an application that uses stateful beans may not be easily scalable. Stateless session beans provide excellent scalability, because the EJB container does not need to keep track of their state across method calls.
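The distinction can be sketched in plain Java, with no EJB container involved; the class names and the shopping-cart scenario are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the stateless/stateful session-bean distinction.
public class SessionBeanSketch {

    // Stateless: no instance variables; each call depends only on its
    // parameters, so any pooled instance can serve any client.
    public static class TaxCalculatorBean {
        public double taxFor(double amount, double rate) { return amount * rate; }
    }

    // Stateful: instance variables hold conversational state across calls
    // for one particular client, like a shopping cart.
    public static class CartBean {
        private final List<String> items = new ArrayList<>();
        public void addItem(String item) { items.add(item); }
        public int itemCount() { return items.size(); }
    }

    public static void main(String[] args) {
        TaxCalculatorBean calc = new TaxCalculatorBean();
        System.out.println(calc.taxFor(100.0, 0.5)); // any instance gives the same answer

        CartBean cart = new CartBean();
        cart.addItem("book");
        cart.addItem("pen");
        System.out.println(cart.itemCount()); // 2 — state survives between calls
    }
}
```

Because `TaxCalculatorBean` carries no state, a container can hand the same instance to many clients, which is the source of the scalability advantage described above.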
All EJBs, session beans included, operate within the context of an EJB server. An EJB server contains constructs known as EJB containers, which are responsible for providing an operating environment for managing and providing services to the EJBs that are running within it. In a typical scenario, the user interface (UI) of an application calls the methods of the session beans as it requires the functionality that they provide. Session beans can call other session beans and entity beans.
Entity beans
Before object orientation became popular, programs were usually written in procedural languages and often employed relational databases to hold the data. Because of the strengths and maturity of relational database technology, it is now often advantageous to develop object-oriented applications that use relational databases. The problem with this approach is that there is an inherent difference between object-oriented and relational database technologies, making it less than natural for them to coexist in one application. The use of entity beans is one way to get the best of both of these worlds, for the following reasons:
1. Entity beans are objects, and they can be designed using object-oriented principles and used in applications as objects.
2. The data in these entity bean objects are persisted in some data store, usually relational databases. All of the benefits of relational technologies—including maturity of products, speed, reliability, ability to recover, and ease of querying—can be leveraged.
In a typical EJB scenario, when a session bean needs to access data, it calls the methods of an entity bean. Entity beans represent the persistent data in an EJB application. For example, an application for an educational institution might have an entity bean named Student that has one instance for every student that is enrolled in an institution. Entity beans, often backed by a relational database, read and write to tables in the database. Because of this, they provide an object-oriented abstraction to some information store.
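A hypothetical sketch of this idea follows, with a `Map` standing in for the relational table that a real container would read and write; the `Student` class and its fields are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the entity-bean idea: an object whose state is
// persisted in an underlying store. A Map stands in for the database table.
public class EntitySketch {
    static final Map<Integer, String> STUDENT_TABLE = new HashMap<>(); // id -> name

    public static class Student {
        public final int id;
        public String name;
        public Student(int id, String name) { this.id = id; this.name = name; }

        public void store() { STUDENT_TABLE.put(id, name); }   // write back to the "table"
        public static Student load(int id) {                   // materialize from the "table"
            return new Student(id, STUDENT_TABLE.get(id));
        }
    }

    public static void main(String[] args) {
        new Student(1, "Ada").store();
        Student s = Student.load(1);  // session logic sees an object, not SQL
        System.out.println(s.name);   // Ada
    }
}
```

The session-bean layer works only with `Student` objects; the mapping to rows and columns stays behind `store` and `load`, which is the object-relational abstraction the paragraph above describes.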

EJB Deployment
The EJB deployment process tends to vary between application servers for several reasons. First, the EJB specification provides minimal guidance in describing how you deploy EJBs. Therefore, vendors adhere to the J2EE/EJB specification to varying degrees; they use their own discretion in defining the deployment process for their specific platforms.
Second, since deployment is an area where application server vendors can add value and uniqueness to their products, they are not motivated to standardize the process. In addition, this lack of standardization presents opportunities for application server and Java IDE vendors to integrate their toolset offerings to facilitate EJB development and deployment, a process that is facilitated only if your preferred IDE supports your targeted application server. Nevertheless, the process for deploying a given EJB must be taken on a case-by-case basis, and is usually unique to each application server.
To illustrate this, we experimented with two EJB implementations: a Lightweight Directory Access Protocol (LDAP) stateless session bean and a simple container-managed persistence (CMP) entity bean. In this article, we identify some of the major differences associated with deploying the same EJB code in the WebLogic, WebSphere, and JBoss application servers. One notable caveat: we did as much as possible by hand (we wrote our code and deployment descriptors and created the jar files using vi/Emacs and the JDK's jar utility), all the while recognizing that EJB deployment may go more smoothly with enterprise development tools closely aligned with a specific application server vendor (e.g., VisualAge for Java used in conjunction with WebSphere).
CORBA
The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) designed to facilitate the communication of systems that are deployed on diverse platforms. CORBA enables collaboration between systems on different operating systems, programming languages, and computing hardware. CORBA has many of the same design goals as object-oriented programming: encapsulation and reuse. CORBA uses an object-oriented model although the systems that utilize CORBA do not have to be object-oriented. CORBA is an example of the distributed object paradigm.

CORBA enables software written in different languages and running on different computers to work with each other seamlessly. Implementation details from specific operating systems, programming languages, and hardware platforms are all removed from the responsibility of developers who use CORBA. CORBA normalizes the method-call semantics between application objects residing either in the same address-space (application) or in remote address-spaces (same host, or remote host on a network). Version 1.0 was released in October 1991.
CORBA uses an interface definition language (IDL) to specify the interfaces which objects present to the outer world. CORBA then specifies a mapping from IDL to a specific implementation language like C++ or Java. Standard mappings exist for Ada, C, C++, C++11, Lisp, Ruby, Smalltalk, Java, COBOL, PL/I and Python. There are also non-standard mappings for Perl, Visual Basic, Erlang, and Tcl implemented by object request brokers (ORBs) written for those languages.
The CORBA specification dictates there shall be an ORB through which an application would interact with other objects. This is how it is implemented in practice:
1. The application initializes the ORB and accesses an internal Object Adapter, which maintains things like reference counting, object (and reference) instantiation policies, and object lifetime policies.
2. The Object Adapter is used to register instances of the generated code classes. Generated code classes are the result of compiling the user IDL code, which translates the high-level interface definition into an OS- and language-specific class base for use by the user application. This step is necessary in order to enforce CORBA semantics and provide a clean user process for interfacing with the CORBA infrastructure.
Some IDL mappings are more difficult to use than others. For example, due to the nature of Java, the IDL-Java mapping is rather straightforward and makes usage of CORBA very simple in a Java application. This is also true of the IDL to Python mapping. The C++ mapping is notoriously difficult; the mapping requires the programmer to learn complex and confusing datatypes that predate the C++ Standard Template Library (STL). By contrast, the C++11 mapping is very easy to use, as it uses the STL heavily. Since the C language is not object-oriented, the IDL to C mapping requires a C programmer to manually emulate object-oriented features.
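As an illustration, here is a small hypothetical IDL interface; an IDL compiler (idlj for the Java mapping, or an ORB vendor's compiler for C++) generates the language-specific stubs and skeletons from it. The module and operation names are invented for this example:

```idl
// Hypothetical interface definition; the IDL compiler turns this into
// language-specific stub (client) and skeleton (server) classes.
module Education {
  interface StudentRecords {
    // 'in' parameters are passed by value from client to server.
    string lookupName(in long studentId);
    void   enroll(in long studentId, in string course);
  };
};
```

Running this through the Java IDL compiler would yield, among other files, a `StudentRecords` Java interface plus stub and skeleton classes; the C++ mapping would produce analogous proxy classes with the more involved datatypes described above.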
The following describes some of the most significant ways that CORBA can be used to facilitate communication among distributed objects.

Objects By Reference
A client first obtains a reference to a remote object. This reference is acquired either through a stringified Uniform Resource Identifier (URI), through a NameService lookup (similar to a Domain Name System (DNS) lookup), or by being passed in as a method parameter during a call.
Object references are lightweight objects matching the interface of the real object (remote or local). Method calls on the reference result in subsequent calls to the ORB and blocking on the thread while waiting for a reply, success or failure. The parameters, return data (if any), and exception data are marshaled internally by the ORB according to the local language and OS mapping.
Data By Value
The CORBA Interface Definition Language provides the language- and OS-neutral inter-object communication definition. CORBA Objects are passed by reference, while data (integers, doubles, structs, enums, etc.) are passed by value. The combination of Objects-by-reference and data-by-value provides the means to enforce strong data typing while compiling clients and servers, yet preserve the flexibility inherent in the CORBA problem-space.
Objects By Value (OBV)
Apart from remote objects, CORBA and RMI-IIOP define the concept of the OBV and valuetypes. The code inside the methods of valuetype objects is executed locally by default. If an OBV has been received from the remote side, the needed code must be either known a priori to both sides or dynamically downloaded from the sender. To make this possible, the record defining the OBV contains a code base, a space-separated list of URLs from which this code should be downloaded. An OBV can also have remote methods.
CORBA Component Model (CCM)
CORBA Component Model (CCM) is an addition to the family of CORBA definitions. It was introduced with CORBA 3, and it describes a standard application framework for CORBA components. Though not dependent on the language-dependent Enterprise JavaBeans (EJB), it is a more general form of EJB, providing four component types instead of the two that EJB defines. It provides an abstraction of entities that can provide and accept services through well-defined named interfaces called ports.
The CCM has a component container, where software components can be deployed. The container offers a set of services that the components can use. These services include (but are not limited to) notification, authentication, persistence and transaction processing. These are the most-used services any distributed system requires, and, by moving the implementation of these services from the software components to the component container, the complexity of the components is dramatically reduced.

Benefits
CORBA's benefits include language- and OS-independence, freedom from technology-linked implementations, strong data typing, a high level of tunability, and freedom from the details of distributed data transfers.
Language independence
CORBA was designed at the outset to free engineers from the limitations of coupling their designs to a particular software language. Currently there are many languages supported by various CORBA providers, the most popular being Java and C++. There are also C++11, C-only, Smalltalk, Perl, Ada, Ruby, and Python implementations, to mention a few.
OS-independence
CORBA's design is meant to be OS-independent. CORBA is available in Java (itself OS-independent), as well as natively for Linux/Unix, Windows, Solaris, OS X, OpenVMS, HP-UX, Android, and others.
Freedom from technologies
One of the main implicit benefits is that CORBA provides a neutral playing field for engineers to be able to normalize the interfaces between various new and legacy systems. When integrating C, C++, Object Pascal, Java, Fortran, Python, and any other language or OS into a single cohesive system design model, CORBA provides the means to level the field and allow disparate teams to develop systems and unit tests that can later be joined together into a whole system. This does not rule out the need for basic system engineering decisions, such as threading, timing, object lifetime, etc. These issues are part of any system regardless of technology. CORBA allows system elements to be normalized into a single cohesive system model.
For example, the design of a multitier architecture is made simple using Java Servlets in the web server and various CORBA servers containing the business logic and wrapping the database accesses. This allows the implementations of the business logic to change, while the interface changes would need to be handled as in any other technology. For example, a database wrapped by a server can have its database schema change for the sake of improved disk usage or performance (or even whole-scale database vendor change), without affecting the external interfaces. At the same time, C++ legacy code can talk to C/Fortran legacy code and Java database code, and can provide data to a web interface.

Data-typing
CORBA provides flexible data typing, for example an "Any" datatype. CORBA also enforces strong data typing, reducing human error. In a situation where name-value pairs are passed around, it is conceivable that a server provides a number where a string was expected. The CORBA Interface Definition Language provides the mechanism to ensure that user code conforms to method names, return types, parameter types, and exceptions.
High tunability
Many implementations (e.g. OmniORB, an open-source C++ and Python implementation)[1] have options for tuning the threading and connection management features. Not all implementations provide the same features; this is up to the implementor.
Freedom from data-transfer details
When handling low-level connection and threading, CORBA provides a high level of detail in error conditions. This is defined in the CORBA-defined standard exception set and the implementation-specific extended exception set. Through the exceptions, the application can determine if a call failed for reasons such as "Small problem, so try again", "The server is dead" or "The reference does not make sense." The general rule is: Not receiving an exception means that the method call completed successfully. This is a very powerful design feature.
Compression
CORBA marshals its data in a binary form and supports compression. IONA, Remedy IT, and Telefónica have worked on an extension to the CORBA standard that delivers compression. This extension is called ZIOP and this is now a formal OMG standard.
Structure of CORBA IDL
The CORBA Interface Definition Language (IDL) is used to define interfaces to objects in your network. This chapter introduces the features of CORBA IDL and illustrates the syntax used to describe interfaces. The first step in developing a CORBA application is to define the interfaces to the objects required in your distributed system. To define these interfaces, you use CORBA IDL.
IDL allows you to define interfaces to objects without specifying the implementation of those interfaces. To implement an IDL interface you must:
1. Define a Java class that can be accessed through the IDL interface.
2. Create objects of that class within an Orbix Java server application.
You can implement IDL interfaces using any programming language for which an IDL mapping is available. An IDL mapping specifies how an interface defined in IDL corresponds to an implementation defined in a programming language. CORBA applications written in different programming languages are fully interoperable. CORBA defines standard mappings from IDL to several programming languages, including C++, Java, and Smalltalk. The Orbix Java IDL compiler converts IDL definitions to corresponding Java definitions, in accordance with the standard IDL to Java mapping.
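As an illustration, the sketch below pairs a small hypothetical IDL interface (shown in a comment) with the rough shape of the Java an IDL compiler produces for it, plus a class implementing it as in step 1 above. The Account name and its operations are invented for this example; real generated code extends org.omg.CORBA.Object, comes with helper and holder classes, and depends on the ORB in use.

```java
// A hypothetical IDL interface and a sketch of its Java mapping:
//
//   // account.idl
//   interface Account {
//       readonly attribute float balance;
//       void makeDeposit(in float amount);
//   };
public class IdlMappingSketch {

    // Roughly what the IDL-to-Java mapping yields for the interface above.
    public interface Account {
        float balance();                // readonly attribute -> accessor method
        void makeDeposit(float amount); // "in" parameter -> plain Java parameter
    }

    // Step 1 of the text: a Java class accessible through the IDL interface.
    static class AccountImpl implements Account {
        private float balance;
        public float balance() { return balance; }
        public void makeDeposit(float amount) { balance += amount; }
    }

    // Step 2 would create AccountImpl objects inside an Orbix Java server;
    // here the implementation is simply exercised locally.
    public static float demo() {
        Account acc = new AccountImpl();
        acc.makeDeposit(100.0f);
        acc.makeDeposit(50.0f);
        return acc.balance();
    }
}
```

Because the client codes only against the Account interface, the same client source works whether the object is local or served remotely through the ORB.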
CORBA interface repository

The Interface Repository is based on the CORBA definition of an Interface Repository. It offers a proper subset of the interfaces defined by CORBA; that is, the APIs that are exposed to programmers are implemented as defined by the Common Object Request Broker: Architecture and Specification Revision 2.4. However, not all interfaces are supported. In general, the interfaces required to read from the Interface Repository are supported, but the interfaces required to write to the Interface Repository are not. Additionally, not all TypeCode interfaces are supported.
Administration of the Interface Repository is done using tools specific to the Oracle Tuxedo software. These tools allow the system administrator to create an Interface Repository, populate it with definitions specified in Object Management Group Interface Definition Language (OMG IDL), and then delete interfaces. Additionally, an administrator may need to configure the system to include an Interface Repository server. For a description of the Interface Repository administration commands, see the Oracle Tuxedo Command Reference and Setting Up an Oracle Tuxedo Application.
Several abstract interfaces are used as base interfaces for other objects in the Interface Repository. A common set of operations is used to locate objects within the Interface Repository. These operations are defined in the abstract interfaces IRObject, Container, and Contained described in this chapter. All Interface Repository objects inherit from the IRObject interface, which provides an operation for identifying the actual type of the object. Objects that are containers inherit navigation operations from the Container interface. Objects that are contained by other objects inherit navigation operations from the Contained interface. The IDLType interface is inherited by all Interface Repository objects that represent OMG IDL types, including interfaces, typedefs, and anonymous types. The TypedefDef interface is inherited by all named noninterface types.
The Interface Repository consists of two distinct components: the database and the server. The server performs operations on the database. The Interface Repository database is created and populated using the idl2ir administrative command. For a description of this command, see the Oracle Tuxedo Command Reference and Setting Up an Oracle Tuxedo Application. From the programmer’s point of view, there is no write access to the Interface Repository. None of the write operations defined by CORBA are supported, nor are set operations on nonread-only attributes.
Read access to the Interface Repository database is always through the Interface Repository server; that is, a client reads from the database by invoking methods that are performed by the server. The read operations are as defined by the CORBA Common Object Request Broker: Architecture and Specification.

CORBA Services and CORBA Component Model
CORBA Services
Another important part of the CORBA standard is the definition of a set of distributed services to support the integration and interoperation of distributed objects. As depicted in the graphic below, the services, known as CORBA Services or COS, are defined on top of the ORB. That is, they are defined as standard CORBA objects with IDL interfaces. As such, the services are sometimes referred to as "Object Services."
COM and .NET
COM and .NET are complementary development technologies. The .NET Common Language Runtime provides bi-directional, transparent integration with COM. This means that COM and .NET applications and components can use functionality from each system. This protects your existing investments in COM applications while allowing you to take advantage of .NET at a controlled pace. COM and .NET can achieve similar results. The .NET Framework provides developers with a significant number of benefits including a more robust, evidence-based security model, automatic memory management and native Web services support. For new development, Microsoft recommends .NET as a preferred technology because of its powerful managed runtime environment and services.

Marshalling
In computer science, marshalling or marshaling is the process of transforming the memory representation of an object into a data format suitable for storage or transmission, and it is typically used when data must be moved between different parts of a computer program or from one program to another. Marshalling is similar to serialization and is used to communicate with remote objects, in this case by means of a serialized object. It simplifies complex communication, allowing custom/complex objects to be used for communication instead of primitives. The opposite, or reverse, of marshalling is called unmarshalling (or demarshalling, similar to deserialization).
Marshalling is used within implementations of different remote procedure call (RPC) mechanisms, where it is necessary for transporting data between processes and/or between threads. In Microsoft's Component Object Model (COM), interface pointers must be marshalled when crossing COM apartment boundaries (that is, crossing between instances of the COM library). In the .NET Framework, the conversion between an unmanaged type and a CLR type, as in the P/Invoke process, is also an example of an action that requires marshalling to take place.
Additionally, marshalling is used extensively within scripts and applications that utilize the XPCOM technologies provided within the Mozilla application framework. The Mozilla Firefox browser is a popular application built with this framework that additionally allows scripting languages to use XPCOM through XPConnect (Cross-Platform Connect).
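As a minimal sketch of the round trip, the following Java example marshals an object into a byte array using the standard java.io serialization classes and unmarshals it back on the "receiving" side; the Order class is invented for illustration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

public class MarshalSketch {
    // An invented object to move across the "wire" (here, just a byte array).
    public static class Order implements Serializable {
        public final String item;
        public final int quantity;
        public Order(String item, int quantity) {
            this.item = item;
            this.quantity = quantity;
        }
    }

    // Marshalling: memory representation -> byte stream suitable for transmission.
    public static byte[] marshal(Order o) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(o);
            }
            return buf.toByteArray();
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }

    // Unmarshalling: byte stream -> reconstructed object on the receiving side.
    public static Order unmarshal(byte[] bytes) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (Order) in.readObject();
        } catch (IOException | ClassNotFoundException e) { throw new RuntimeException(e); }
    }
}
```

In a real RPC or COM system the byte stream would travel over a socket or cross an apartment boundary instead of staying in memory, but the marshal/unmarshal pair plays the same role.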
Custom Marshalling
Whenever you want complete control over everything that happens in the entire communication process, you should use custom marshalling. With this scheme, nothing comes for free and you will have to implement the marshalling process yourself in its entirety. You could want to use custom marshalling, if you will be using some other transport mechanism for communication than the usual RPC mechanism, e.g. named pipes, HTTP or raw TCP/IP.
With standard marshalling, COM provides you with an implementation of the IMarshal interface, but with custom marshalling, IMarshal with its six member functions is what you will have to implement yourself, apart from the proxy/stub DLL and CoClass implementation. The functions of IMarshal are called by the COM API function CoMarshalInterface in order to package up the parameters and get them on their way. In practice, implementing custom marshalling is probably too much work to be practically useful. This is also the main reason why we have not dealt with this kind of marshalling.
Standard Marshalling
Sometimes the restrictions of type library marshalling are next to unacceptable. This is the case when you need more sophisticated structures than safearrays can provide, or when you are trying to turn legacy code (e.g. a huge C library) into a COM component.
In this case you will have to provide your own proxy/stub DLL for marshalling the structures. The traditional way of defining an interface is to use IDL and compile the file with Microsoft’s IDL compiler, MIDL. Based on the interface description, MIDL can generate a type library (.TLB), and the necessary C files for producing the proxy/stub DLL.
In Delphi 3.x, a type library is defined using a graphical tool known as the Type Library Editor. This will automatically generate a type library, Pascal wrappers for the interface definition (akin to those listed in the appendices), and a skeleton implementation of the associated CoClass. Unfortunately, neither IDL structs nor unions are supported in the editor.
Service-Oriented Architecture (SOA)
Service-oriented architecture (SOA) is a software design and software architecture design pattern based on distinct pieces of software providing application functionality as services to other applications. This is known as service-orientation. It is independent of any vendor, product or technology.
A service is a self-contained unit of functionality, such as retrieving an online bank statement. Services can be combined by other software applications to provide the complete functionality of a large software application. SOA makes it easy for computers connected over a network to cooperate. Every computer can run an arbitrary number of services, and each service is built in a way that ensures that the service can exchange information with any other service in the network without human interaction and without the need to make changes to the underlying program itself.
Services are unassociated, loosely coupled units of functionality that are self-contained. Each service implements at least one action, such as submitting an online application for an account, retrieving an online bank statement, or modifying an online booking or airline ticket order. Within an SOA, services use defined protocols that describe how services pass and parse messages using description metadata, which describes in sufficient detail not only the characteristics of these services but also the data that drives them. Programmers have made extensive use of XML in SOA to structure data that they wrap in a nearly exhaustive description container. Analogously, the Web Services Description Language (WSDL) typically describes the services themselves, while SOAP (originally Simple Object Access Protocol) describes the communications protocols.

Design concept
SOA is based on the concept of a service. Depending on the service design approach taken, each SOA service is designed to perform one or more activities by implementing one or more service operations. As a result, each service is built as a discrete piece of code. This makes it possible to reuse the code in different ways throughout the application by changing only the way an individual service interoperates with other services that make up the application, versus making code changes to the service itself. SOA design principles are used during software development and integration.
SOA generally provides a way for consumers of services, such as web-based applications, to be aware of available SOA-based services. For example, several disparate departments within a company may develop and deploy SOA services in different implementation languages; their respective clients will benefit from a well-defined interface to access them. SOA defines how to integrate widely disparate applications for a Web-based environment and uses multiple implementation platforms. Rather than defining an API, SOA defines the interface in terms of protocols and functionality. An endpoint is the entry point for such a SOA implementation.
Service-orientation requires loose coupling of services with operating systems and other technologies that underlie applications. SOA separates functions into distinct units, or services, which developers make accessible over a network in order to allow users to combine and reuse them in the production of applications. These services and their corresponding consumers communicate with each other by passing data in a well-defined, shared format, or by coordinating an activity between two or more services. For some, SOA can be seen in a continuum from older concepts of distributed computing and modular programming, through SOA, and on to current practices of mashups, SaaS, and cloud computing (which some see as the offspring of SOA).
Enterprise Service Bus (ESB)
An enterprise service bus (ESB) is a software architecture model used for designing and implementing communication between mutually interacting software applications in a service-oriented architecture (SOA). As a software architectural model for distributed computing it is a specialty variant of the more general client server model and promotes agility and flexibility with regards to communication between applications. Its primary use is in enterprise application integration (EAI) of heterogeneous and complex landscapes.

The concept was developed in analogy to the bus concept found in computer hardware architecture, combined with the modular and concurrent design of high-performance computer operating systems. The motivation was to find a standard, structured, general-purpose concept for describing the implementation of loosely coupled software components (called services) that are expected to be independently deployed, running, heterogeneous, and disparate within a network. ESB is also the intrinsically adopted network design of the World Wide Web and the common implementation pattern for service-oriented architecture.
An ESB generally provides an abstraction layer on top of an implementation of an enterprise messaging system. In order for an integration broker to be considered a true ESB, it would need to have its base functions broken up into their constituent and atomic parts. The atomic components would then be capable of being separately deployed across the bus while working together in harmony as necessary.
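The bus idea can be sketched in Java as a toy publish/subscribe system in which services exchange messages through the bus rather than calling each other directly. The topic names and services below are invented; a real ESB layers routing, transformation, and reliable delivery on top of an enterprise messaging system.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class BusSketch {
    // The bus keeps, per topic, the list of subscribed services.
    static final Map<String, List<Consumer<String>>> TOPICS = new HashMap<>();

    // A service attaches itself to the bus for a given topic.
    public static void subscribe(String topic, Consumer<String> service) {
        TOPICS.computeIfAbsent(topic, t -> new ArrayList<>()).add(service);
    }

    // A publisher hands a message to the bus; the bus delivers it to every
    // subscriber. Publisher and subscribers never reference each other.
    public static void publish(String topic, String message) {
        for (Consumer<String> service : TOPICS.getOrDefault(topic, List.of())) {
            service.accept(message);
        }
    }

    public static List<String> demo() {
        List<String> received = new ArrayList<>();
        // Two decoupled services; neither knows the other exists.
        subscribe("orders", msg -> received.add("billing saw " + msg));
        subscribe("orders", msg -> received.add("shipping saw " + msg));
        publish("orders", "order-1");
        return received;
    }
}
```

The loose coupling shown here is what lets an ESB's atomic components be deployed separately across the bus while still working together.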
Web Services Technologies
A Web service is a set of related application functions that can be programmatically invoked over the Internet. Businesses can dynamically mix and match Web services to perform complex transactions with minimal programming. Web services allow buyers and sellers all over the world to discover each other, connect dynamically, and execute transactions in real time with minimal human interaction.
The web is a vast, interconnected global information system. Information on the web is hosted on web sites, which contain text, pictures and multimedia. This information can be viewed using browsers such as Internet Explorer, Firefox and Lynx. These web sites are indexed by search engines, which enable us to find required information; popular search engines are Yahoo, Google and MSN. A web service is also hosted on a web server, but we can access it only programmatically, not through a browser. So a web service uses the web as its access medium.
XML
XML is a markup language. With a markup language, we can structure a document using tags. Using XML, we can also customize the tags. Each bit of information in a document is defined by tags, without the overload of formatting present in HTML. This type of representation is suitable for application-to-application communication. Another feature of XML is that the vocabulary can be extended; vocabulary refers to the types of tags used to structure a document in XML.
For example, you can create a tag, say a contact tag, and use it to represent contact information in an electronic document. This extended vocabulary is supplied to the receiving application using what is called an XML schema. XML Schema is a W3C standard. The XML schema is used by the receiving application to validate the document being received.
Often an application needs to transform the XML document being received into a different format. The receiving application does this using XSLT. XML's transformability is one of the reasons Web services work well for building portals. As shown in the figure, backend Web services deliver content to the portal in XML format. The portal presentation logic, called a portlet, can transform the content as needed to work with the portal user interface, which might be a frame within a browser, a portable handset, or a standard telephone (using voice generation and voice recognition software). XML supports multichannel portal applications.
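This transformability can be demonstrated with the XSLT support built into the Java platform (javax.xml.transform); the contact document and the tiny stylesheet below are invented for illustration.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

public class XsltSketch {
    // A stylesheet that pulls the <name> text out of a <contact> document.
    static final String STYLESHEET =
        "<?xml version='1.0'?>" +
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
        "<xsl:output method='text'/>" +
        "<xsl:template match='/'><xsl:value-of select='/contact/name'/></xsl:template>" +
        "</xsl:stylesheet>";

    // Applies the stylesheet to an XML document and returns the transformed text.
    public static String transform(String xml) {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(STYLESHEET)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
            return out.toString();
        } catch (Exception e) { throw new RuntimeException(e); }
    }
}
```

A portlet would use the same mechanism with a different stylesheet per channel: one producing HTML for a browser, another producing markup for a handset.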
SOA
SOA stands for service-oriented architecture. SOA is a concept and describes a set of well-established patterns. Each pattern represents a mechanism to describe a service, to advertise and discover a service, and to communicate with a service. Web services rely on these patterns, and client applications connect to a service using these patterns.
In the SOA concept, three basic roles are defined. They are :
Service provider - develops or supplies the service.
Service consumer - uses the service.
Service broker - facilitates the advertising and discovery process.
The three basic operations in the SOA are register, find, and bind. The service provider registers the service with a service broker. A service consumer queries the service broker to find a compatible service. The service broker gives the service consumer directions on how to find the service and its service contract. The service consumer uses the contract to bind the client to the service, at which point the client and service can communicate. The standard technologies for implementing the SOA patterns with Web services are the Web Services Description Language (WSDL), Universal Description, Discovery and Integration (UDDI), and the Simple Object Access Protocol (SOAP).
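The register/find/bind triangle can be sketched in a few lines of Java. Here the broker is just an in-memory map and a "service" is a String-to-String function; these stand in for a real UDDI registry and WSDL contracts, and the service name used is invented.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class SoaSketch {
    // The broker's registry: service names mapped to service endpoints.
    static final Map<String, Function<String, String>> REGISTRY = new HashMap<>();

    // Register: the provider advertises its service with the broker.
    public static void register(String name, Function<String, String> service) {
        REGISTRY.put(name, service);
    }

    // Find + bind: the consumer queries the broker, then invokes the service.
    public static String call(String name, String request) {
        Function<String, String> service = REGISTRY.get(name);     // find
        if (service == null) throw new IllegalStateException("no such service: " + name);
        return service.apply(request);                             // bind and invoke
    }

    public static String demo() {
        register("bank.statement", account -> "statement for " + account); // provider
        return call("bank.statement", "acct-42");                          // consumer
    }
}
```

The point of the indirection is that the consumer holds only a name and a contract, so the provider can be replaced or relocated without changing consumer code.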
WSDL, UDDI, and SOAP are the three core technologies most often used to implement Web services. WSDL provides a mechanism to describe a Web service, UDDI provides a mechanism to advertise and discover a Web service, and SOAP provides a mechanism for clients and services to communicate. Figure 3-4 shows these technologies mapped to the SOA.
WSDL is an XML language that describes a Web service. A WSDL document describes:
1. The functionality of a Web service,
2. How the Web service communicates,
3. Where the Web service resides.

The WSDL document contains three parts, which together describe everything needed to call a Web service. This document can be compiled into application code, which a client application uses to access the Web service. This application code is called a client proxy; in other words, a WSDL document compiles into a client proxy. The client application calls the client proxy, and the proxy constructs the messages and manages the communication on behalf of the client application.

General Middleware
It includes the communication stacks, distributed directories, authentication services, network time, RPC, Queuing services along with the network OS extensions such as the distributed file and print services.

Service Specific Middleware
It is needed to accomplish a particular type of Client/Server service, which includes:
Database specific middleware
OLTP specific middleware
Groupware specific middleware
Object specific middleware
Internet specific middleware
System management specific middleware
The building blocks of client/server applications are:
1. Client
2. Middleware
3. Server
The client server building blocks
The Client Building Block
* Runs the client side of the application
* It runs on the OS that provides a GUI or an OOUI and that can access distributed services, wherever they may be.
* The client also runs a component of the Distributed System Management (DSM) element.
The Server Building Block
* Runs the server side of the application
* The server application typically runs on top of some shrink-wrapped server software package.
* The five contending server platforms for creating the next generation of client/server applications are SQL database servers, TP monitors, groupware servers, object servers, and Web servers.
* The server side depends on the OS to interface with the middleware building block.
* It may be a simple agent or a shared object database etc.

The Middleware Building Block
* Runs on both the client and server sides of an application
* Middleware is the nervous system of the client/server infrastructure
* This also has the DSM component
DSM
* Runs on every node in the client/server network.
* A managing workstation collects information from all its agents on the network and displays it graphically.
* The managing workstation can also instruct its agents to perform actions on its behalf.
RPC
In computer science, a remote procedure call (RPC) is an inter-process communication that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details for this remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program, or remote. When the software in question uses object-oriented principles, RPC is called remote invocation or remote method invocation.
An RPC is initiated by the client, which sends a request message to a known remote server to execute a specified procedure with supplied parameters. The remote server sends a response to the client, and the application continues its process. While the server is processing the call, the client is blocked (it waits until the server has finished processing before resuming execution), unless the client sends an asynchronous request to the server, such as an XHTTP call. There are many variations and subtleties in various implementations, resulting in a variety of different (incompatible) RPC protocols.
An important difference between remote procedure calls and local calls is that remote calls can fail because of unpredictable network problems. Also, callers generally must deal with such failures without knowing whether the remote procedure was actually invoked. Idempotent procedures (those that have no additional effects if called more than once) are easily handled, but enough difficulties remain that code to call remote procedures is often confined to carefully written low-level subsystems.
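The location transparency described above comes from a client-side stub that packages up each call. The sketch below fakes that stub with a java.lang.reflect.Proxy whose handler dispatches locally; in a real RPC system the handler would marshal the method name and arguments onto the network. The Calculator interface is invented for the example.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class RpcStubSketch {
    // An invented service interface; the client codes against this as if local.
    public interface Calculator { int add(int a, int b); }

    // The "server side": in a real RPC system this runs in another process and
    // the request (method name + marshalled arguments) arrives over the network.
    static int dispatch(String method, Object[] args) {
        if ("add".equals(method)) return (Integer) args[0] + (Integer) args[1];
        throw new UnsupportedOperationException(method);
    }

    // The client-side stub: a dynamic proxy whose handler stands in for the
    // marshal-and-send step of a real RPC runtime.
    public static Calculator stub() {
        InvocationHandler handler = (proxy, method, args) -> dispatch(method.getName(), args);
        return (Calculator) Proxy.newProxyInstance(
                Calculator.class.getClassLoader(),
                new Class<?>[] { Calculator.class },
                handler);
    }
}
```

Calling RpcStubSketch.stub().add(2, 3) reads exactly like a local call, which is the transparency RPC aims for; the failure modes discussed above are what such a stub would surface as exceptions.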
Java RMI
The Java Remote Method Invocation (Java RMI) is a Java API that performs the object-oriented equivalent of remote procedure calls (RPC), with support for direct transfer of serialized Java classes and distributed garbage collection.
1. The original implementation depends on Java Virtual Machine (JVM) class representation mechanisms and it thus only supports making calls from one JVM to another. The protocol underlying this Java-only implementation is known as Java Remote Method Protocol (JRMP).
2. In order to support code running in a non-JVM context, a CORBA version was later developed.
Usage of the term RMI may denote solely the programming interface or may signify both the API and JRMP, whereas the term RMI-IIOP (read: RMI over IIOP) denotes the RMI interface delegating most of the functionality to the supporting CORBA implementation.
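A minimal, self-contained Java RMI example follows, assuming loopback networking is available. The servant is exported with UnicastRemoteObject, and the stub RMI returns marshals the call over JRMP even though both ends live in the same JVM; the Greeter interface is invented for illustration.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

public class RmiSketch {
    // A remote interface: extends Remote, and every method declares RemoteException.
    public interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    static class GreeterImpl implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    public static String demo() {
        try {
            GreeterImpl servant = new GreeterImpl();
            // Export the servant; RMI hands back a stub that marshals calls
            // over JRMP on a loopback TCP connection.
            Greeter stub = (Greeter) UnicastRemoteObject.exportObject(servant, 0);
            try {
                return stub.greet("RMI"); // looks local, travels through the stub
            } finally {
                UnicastRemoteObject.unexportObject(servant, true);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

A full application would additionally publish the stub in an RMI registry so that clients in other JVMs could look it up by name.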
Distributed Objects
The term distributed objects usually refers to software modules that are designed to work together but reside either in multiple computers connected via a network or in different processes on the same computer. One object sends a message to another object in a remote machine or process to perform some task; the results are sent back to the calling object.
The term may also generally refer to one of the extensions of the basic object concept used in the context of distributed computing, such as replicated objects or live distributed objects.

1. Replicated objects are groups of software components (replicas) that run a distributed multi-party protocol to achieve a high degree of consistency between their internal states, and that respond to requests in a coordinated manner. Referring to the group of replicas jointly as an object reflects the fact that interacting with any of them exposes the same externally visible state and behavior.
2. Live distributed objects (or simply live objects) generalize the replicated object concept to groups of replicas that might internally use any distributed protocol, perhaps resulting in only a weak consistency between their local states. Live distributed objects can also be defined as running instances of distributed multi-party protocols, viewed from the object-oriented perspective as entities that have distinct identity, and that can encapsulate distributed state and behavior.
OMG
The Object Management Group (OMG) is an international non-profit organization supported by information systems vendors, software developers and users. OMG was founded in 1989, now has over 600 member organizations, and meets bi-monthly. OMG provides a widely supported framework for open, distributed, interoperable, scalable, reusable, portable software components based on OMG-standard object-oriented interfaces. Objectives of the OMG architecture are:
1. to benefit application developers by making it much easier for developers to build large-scale, industrial strength applications from a widely available toolkit of standard components, and
2. to benefit end-users by providing a common semantics that goes far beyond the notion of a common look-and-feel in today's desktop applications.

Most work occurs between meetings, but decisions are made in key OMG committees: the OMG Architecture Board, the Platform and Domain Technical Committees, and various Task Forces, Special Interest Groups, and Subcommittees, all of which report to the Architecture Board. Task Forces typically issue Requests for Information (RFI) to help define a part of OMG's overall Object Management Architecture, then define an architecture and roadmap, and finally issue a series of Requests for Proposals (RFPs) to industry for interfaces to object services (software components) that populate the architecture. RFP respondents provide candidate interface specifications and promise conformant, commercially available implementations. From RFP issue to technology adoption is typically one year; commercial implementations are typically available a year after that.
There are currently two Platform Technical Committee (PTC) task forces: the newly combined Object Request Broker/Object Services TF (ORBOS) and the Common Facilities TF. The Internet SIG reports to the PTC. There are currently several (relatively new) Domain Technical Committee (DTC) task forces and SIGs: Financial TF, Manufacturing TF, Health Care TF, Business Objects TF, Analysis and Design TF, GIS SIG, Multimedia and Electronic Commerce SIG, and End User SIG (mirroring OMG's potential for providing industry-specific object standards). There are currently two standing subcommittees: Policies and Procedures, and Liaison, the latter concerned with ensuring that OMG builds and maintains close liaisons with other industry groups and standards development organizations.
Overview of CORBA
The Common Object Request Broker Architecture (CORBA) [OMG:95a] is an emerging open distributed object computing infrastructure being standardized by the Object Management Group (OMG). CORBA automates many common network programming tasks such as object registration, location, and activation; request demultiplexing; framing and error handling; parameter marshalling and demarshalling; and operation dispatching. See the OMG Web site for more overview material on CORBA. Results from research on high-performance and real-time CORBA are freely available for downloading in the open-source TAO ORB.
Overview of COM/DCOM
COM
Component Object Model (COM) is a binary-interface standard for software components introduced by Microsoft in 1993. It is used to enable interprocess communication and dynamic object creation in a wide range of programming languages. COM is the basis for several other Microsoft technologies and frameworks, including OLE, OLE Automation, ActiveX, COM+, DCOM, the Windows shell, DirectX, and Windows Runtime.

The essence of COM is a language-neutral way of implementing objects that can be used in environments different from the one in which they were created, even across machine boundaries. For well-authored components, COM allows reuse of objects with no knowledge of their internal implementation, as it forces component implementers to provide well-defined interfaces that are separated from the implementation. The different allocation semantics of languages are accommodated by making objects responsible for their own creation and destruction through reference-counting. Casting between different interfaces of an object is achieved through the QueryInterface method. The preferred method of inheritance within COM is the creation of sub-objects to which method calls are delegated.
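The reference counting and QueryInterface mechanisms described above can be illustrated with a conceptual sketch in Java. This is an analogy, not real COM: the interface and class names below mirror COM's IUnknown idiom but are invented for illustration.

```java
// Conceptual Java sketch of COM's reference counting and QueryInterface.
// The names mirror COM's IUnknown pattern; this is an analogy, not real COM.
interface IUnknown {
    Object queryInterface(Class<?> iid); // returns the interface or null
    int addRef();
    int release();
}

interface IPrinter { String print(String text); }

class PrinterComponent implements IUnknown, IPrinter {
    private int refCount = 0;

    public Object queryInterface(Class<?> iid) {
        // A COM object answers only for interfaces it actually implements;
        // a successful query also bumps the reference count.
        if (iid == IUnknown.class || iid == IPrinter.class) {
            addRef();
            return this;
        }
        return null; // real COM would return E_NOINTERFACE
    }
    public int addRef() { return ++refCount; }
    public int release() {
        // When the count reaches zero, a COM object destroys itself.
        return --refCount;
    }
    public String print(String text) { return "printed: " + text; }
}
```

The object manages its own lifetime through the count, which is how COM accommodates the different allocation semantics of client languages.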
COM is an interface technology defined and implemented as a standard only on Microsoft Windows and in Apple's Core Foundation 1.3 and later plug-in API, which in any case implements only a subset of the whole COM interface. For some applications, COM has been replaced, at least to some extent, by the Microsoft .NET Framework and by support for Web services through the Windows Communication Foundation (WCF). However, COM objects can be used with all .NET languages through .NET COM Interop. Networked DCOM uses proprietary binary formats, while WCF encourages the use of XML-based SOAP messaging. COM is very similar to other component software interface technologies, such as CORBA and JavaBeans, although each has its own strengths and weaknesses.
Unlike C++, COM provides a stable ABI that does not change between compiler releases. This makes COM interfaces attractive for object-oriented C++ libraries that are to be used by clients compiled using different compiler versions.
DCOM
Microsoft Distributed COM (DCOM) extends the Component Object Model (COM) to support communication among objects on different computers, whether on a LAN, a WAN, or even the Internet. With DCOM, your application can be distributed at locations that make the most sense to your customer and to the application.
Because DCOM is a seamless evolution of COM, the world's leading component technology, you can take advantage of your existing investment in COM-based applications, components, tools, and knowledge to move into the world of standards-based distributed computing. As you do so, DCOM handles the low-level details of network protocols so you can focus on your real business: providing great solutions to your customers.

DCOM is an extension of the Component Object Model (COM). COM defines how components and their clients interact. This interaction is defined such that the client and the component can connect without the need of any intermediary system component. The client calls methods in the component without any overhead whatsoever.
In today's operating systems, processes are shielded from each other. A client that needs to communicate with a component in another process cannot call the component directly, but has to use some form of interprocess communication provided by the operating system. COM provides this communication in a completely transparent fashion: it intercepts calls from the client and forwards them to the component in another process. When client and component reside on different machines, DCOM simply replaces the local interprocess communication with a network protocol. Neither the client nor the component is aware that the wire that connects them has just become a little longer.
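This transparent interception can be sketched in Java using a JDK dynamic proxy: the client calls through an interface, and an interposed handler forwards the call, which is exactly the point at which DCOM would substitute a network protocol. The interface and class names are illustrative.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Sketch of COM-style call interception: every client call passes through
// an invocation handler before reaching the component.
interface Greeter { String greet(String name); }

class GreeterImpl implements Greeter {
    public String greet(String name) { return "hello, " + name; }
}

class Interceptor {
    // Wraps a component so calls are intercepted and forwarded, just as
    // COM's proxy forwards calls to the component's process.
    static Greeter wrap(Greeter target) {
        InvocationHandler h = (proxy, method, args) -> {
            // In DCOM this is where arguments would be marshalled and
            // sent over the wire; here we simply forward the call.
            return method.invoke(target, args);
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class }, h);
    }
}
```

The client holds only a Greeter reference, so it cannot tell whether the call is local or remote; that is the transparency the passage describes.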
Overview of EJB
EJB stands for Enterprise JavaBeans. EJB is an essential part of the J2EE platform, which has a component-based architecture providing multi-tiered, distributed, and highly transactional features to enterprise-level applications.
EJB provides an architecture for developing and deploying component-based enterprise applications with robustness, high scalability, and high performance in mind. An EJB application can be deployed on any application server compliant with the J2EE 1.3 specification.
Benefits
1. Simplified development of large-scale, enterprise-level applications.
2. The application server/EJB container provides most of the system-level services, such as transaction handling, logging, load balancing, persistence, and exception handling, so the developer has to focus only on the business logic of the application.
3. The EJB container manages the life cycle of EJB instances, so the developer need not worry about when to create or delete EJB objects.
EJB Architecture
EJBs are based conceptually on the Java Remote Method Invocation (RMI) model. For example, remote object access and parameter passing for EJBs follow the RMI specification. The EJB specification does not prescribe that the transport mechanism has to be pure RMI. The Oracle8i EJB server uses RMI over IIOP for its transport protocol, a practice that is becoming common among server vendors.
Basic Concepts
Before going into details about implementing EJBs, some basic concepts must be clarified. First of all, recall that a bean runs in a container. The container, which is part of the EJB server, provides a number of services to the bean. These include transaction services, synchronization services, and security.
To provide these services, the bean container must be able to intercept calls to bean methods. For example, a client application calls a bean method that has a transaction attribute that requires the bean to create a new transaction context. The bean container must be able to interpose code to start a new transaction before the method call, and to commit the transaction, if possible, when the method completes, and before any values are returned to the client.
For this reason and others, a client application does not call the remote bean methods directly. Instead, the client invokes the bean method through a two-step process, mediated by the ORB and by the container.
First, the client actually calls a local proxy stub for the remote method. The stub marshalls any parameter data and then calls a remote skeleton on the server. The skeleton unmarshalls the data and upcalls to the bean container. This step is required because of the remote nature of the call. Note that this step is completely transparent to both the client application developer and the bean developer. It is a detail you do not need to know to write your application code, either on the client or the server. Nevertheless, it is useful to know what is going on, especially when it comes to understanding what happens during bean deployment.

In the second step, the bean container gets the skeleton upcall, then interposes whatever services are required by the context. These can include:
1. authenticating the client, on the first method call
2. performing transaction management
3. calling synchronization methods in the bean itself (see Session Synchronization)
4. performing identity checks and identity switching
The container then delegates the method call to the bean. The bean method executes. When it returns, the thread of control returns to the bean container, which interposes whatever services are required by the context. For example, if the method is running in a transaction context, the bean container performs a commit operation, if possible. This depends on the transaction attributes in the bean descriptor.
Then the bean container calls the skeleton, which marshalls return data, and returns it to the client stub. These steps are completely invisible to client-side and server-side application developers. One of the major advantages of the EJB development model is that it hides the complexity of transaction and identity management from developers.
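The interposition described above can be sketched in plain Java: the container starts a transaction before delegating to the bean, then commits (or rolls back) when the thread of control returns. The class and method names are illustrative, not part of the EJB API.

```java
// Sketch of how an EJB container interposes services around a bean call:
// the skeleton upcalls into the container, which begins a transaction,
// delegates to the bean, and commits (or rolls back) afterwards.
interface AccountBean { int deposit(int amount); }

class AccountBeanImpl implements AccountBean {
    private int balance = 0;
    public int deposit(int amount) { return balance += amount; }
}

class Container {
    final StringBuilder log = new StringBuilder();

    int invokeDeposit(AccountBean bean, int amount) {
        log.append("begin-tx;");               // interposed before the call
        try {
            int result = bean.deposit(amount); // delegate to the bean method
            log.append("commit;");             // interposed after the call
            return result;
        } catch (RuntimeException e) {
            log.append("rollback;");           // undo on failure
            throw e;
        }
    }
}
```

Neither the bean nor the client contains any transaction code, which is the complexity-hiding the EJB model is built around.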
Building and Deploying EJBs
1. Each EJB must be provided with a deployment descriptor.
2. The deployment descriptor is a serialized instance of a Java class that contains information about how to deploy the bean.
3. Two flavors of deployment descriptors:
Session Descriptors – apply to Session EJBs
Entity Descriptors – apply to Entity EJBs
4. A Deployment Descriptor contains information such as the following:
The name of the EJB Class
The name of the EJB Home Interface
The name of the EJB Remote Interface
ACLs of entities authorized to use each class or method.
For Entity Beans, a list of container-managed fields.
For Session Beans, a value denoting whether the bean is stateful or stateless.
A properties file may be generated, containing environment properties that will be available at runtime.
A manifest file is needed to create the ejb-jar file, which is the medium used to distribute EJBs.
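Since the text describes the deployment descriptor as a serialized instance of a Java class, the idea can be sketched as follows. The class and field names are invented for illustration; real descriptors are defined by the EJB specification.

```java
import java.io.*;

// Sketch of an EJB 1.0-style deployment descriptor: a serializable Java
// object recording the bean's class names and a stateful/stateless flag.
class SessionDescriptor implements Serializable {
    String ejbClass, homeInterface, remoteInterface;
    boolean stateful;

    SessionDescriptor(String ejb, String home, String remote, boolean stateful) {
        this.ejbClass = ejb;
        this.homeInterface = home;
        this.remoteInterface = remote;
        this.stateful = stateful;
    }

    // Serialize the descriptor, as it would be packaged into an ejb-jar.
    static byte[] toBytes(SessionDescriptor d) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(d);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Deserialize, as the container would at deployment time.
    static SessionDescriptor fromBytes(byte[] b) {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(b))) {
            return (SessionDescriptor) ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

At deployment time the container reads such descriptors back out of the ejb-jar to learn the class names, interfaces, and bean kind.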
Deploying the EJBs:
1. At deployment time, the EJB container must read the ejb-jar file and create implementations for the home and remote interfaces.
2. It reads the deployment descriptor, ensures that the bean has what it needs, and adds the bean's property settings to the environment so that they are available to the bean at runtime.

EJB Applications
The workbench provides a specialized environment that you can use to develop and test enterprise beans that conform to the distributed component architecture defined in the Sun Microsystems Enterprise JavaBeans™ (EJB) specification. This product supports the Enterprise JavaBeans 1.1, 2.0, and 2.1 specification levels.
This product also supports extended Enterprise JavaBeans functionality provided by WebSphere® Application Server, including extensions to the specification and security and other bindings. The complete Enterprise JavaBeans specifications and descriptions of the technology are available from the java.sun.com Web site.
The EJB development environment includes the following tools:
The J2EE perspective
Tools for importing existing EJB JAR files
Tools for creating enterprise beans and access beans
Tools for building data persistence into enterprise beans
Tools for generating deployment code
Tools for validating your enterprise beans for specification compliance
Session beans
One way to think about the application logic layer (middle tier) is as a set of objects that, together, implement the business logic of an application. Session beans are the construct in EJB designed for this purpose. As shown in Figure 2, there may be multiple session beans in an application. Each handles a subset of the application's business logic.
A session bean tends to be responsible for a group of related functionality. For example, an application for an educational institution might have a session bean whose methods contain logic for handling student records. Another session bean might contain logic that maintains the lists of courses and programs available at that institution.
There are two types of session beans, which are defined by their use in a client interaction:

1. Stateless: These beans do not declare any instance variables, so their methods can act only on the parameters passed to them. There is no way to maintain state across method calls.
2. Stateful: These beans can hold client state across method invocations. This is possible through instance variables declared in the class definition. The client sets the values of these variables and uses them in later method calls.
There may be more work involved for the server to share stateful session beans than is required to share stateless beans. Storing the state of an EJB is a very resource-intensive process, so an application that uses stateful beans may not be easily scalable. Stateless session beans provide excellent scalability, because the EJB container does not need to keep track of their state across method calls.
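The distinction can be sketched in plain Java, with no container or javax.ejb API involved; the class names are illustrative.

```java
// Stateless vs. stateful, reduced to the essentials.

class StatelessCalculator {
    // No instance variables: each call is independent, so the container
    // can hand any pooled instance to any client.
    int add(int a, int b) { return a + b; }
}

class StatefulCart {
    // An instance variable holds client state between calls, so the
    // container must tie this instance to one client's conversation.
    private int items = 0;
    void addItem() { items++; }
    int itemCount() { return items; }
}
```

Because the stateless bean carries nothing between calls, any instance is interchangeable, which is exactly why stateless beans scale so well.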
All EJBs, session beans included, operate within the context of an EJB server. An EJB server contains constructs known as EJB containers, which are responsible for providing an operating environment for managing and providing services to the EJBs that are running within it. In a typical scenario, the user interface (UI) of an application calls the methods of the session beans as it requires the functionality that they provide. Session beans can call other session beans and entity beans.
Entity beans
Before object orientation became popular, programs were usually written in procedural languages and often employed relational databases to hold the data. Because of the strengths and maturity of relational database technology, it is now often advantageous to develop object-oriented applications that use relational databases. The problem with this approach is that there is an inherent difference between object-oriented and relational database technologies, making it less than natural for them to coexist in one application. The use of entity beans is one way to get the best of both of these worlds, for the following reasons:
1. Entity beans are objects, and they can be designed using object-oriented principles and used in applications as objects.
2. The data in these entity bean objects are persisted in some data store, usually relational databases. All of the benefits of relational technologies—including maturity of products, speed, reliability, ability to recover, and ease of querying—can be leveraged.
In a typical EJB scenario, when a session bean needs to access data, it calls the methods of an entity bean. Entity beans represent the persistent data in an EJB application. For example, an application for an educational institution might have an entity bean named Student that has one instance for every student that is enrolled in an institution. Entity beans, often backed by a relational database, read and write to tables in the database. Because of this, they provide an object-oriented abstraction to some information store.
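The entity-bean idea, an object whose state lives in an underlying information store, can be sketched with a Map standing in for a relational table. The names here are invented for illustration and are not the javax.ejb API.

```java
import java.util.Map;

// Sketch of the entity-bean idea: a Student object whose state is kept
// in an underlying store (a Map standing in for a relational table
// keyed by student id).
class Student {
    private final String id;
    private final Map<String, String> store; // one "row" per student id

    Student(String id, Map<String, String> store) {
        this.id = id;
        this.store = store;
    }
    void setName(String name) { store.put(id, name); } // write to the "table"
    String getName() { return store.get(id); }         // read from the "table"
}
```

Two Student objects constructed with the same id see the same persisted state, which is the object-oriented abstraction over an information store that the passage describes.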

EJB Deployment
The EJB deployment process tends to vary between application servers for several reasons. First, the EJB specification provides minimal guidance in describing how you deploy EJBs. Therefore, vendors adhere to the J2EE/EJB specification to varying degrees; they use their own discretion in defining the deployment process for their specific platforms.
Second, since deployment is an area where application server vendors can add value and uniqueness to their product, they are not motivated to standardize the process. In addition, this lack of standardization presents opportunities for application server and Java IDE vendors to integrate their toolset offerings to facilitate EJB development and deployment, a process facilitated only if your preferred IDE supports your targeted application server. Consequently, the process for deploying a given EJB must be taken on a case-by-case basis and is usually unique to each application server.
To illustrate this, we experimented with two EJB implementations: a Lightweight Directory Access Protocol (LDAP) stateless session bean and a simple container-managed persistence (CMP) entity bean. In this article, we identify some of the major differences associated with deploying the same EJB code in the WebLogic, WebSphere, and JBoss application servers. One notable caveat: we did as much as possible by hand (we wrote our code and deployment descriptors, and created the jar files using vi/Emacs and the JDK's jar utility), all the while recognizing that EJB deployment may go more smoothly with enterprise development tools closely aligned with a specific application server vendor (e.g., VisualAge for Java used in conjunction with WebSphere).
CORBA
The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) designed to facilitate the communication of systems that are deployed on diverse platforms. CORBA enables collaboration between systems on different operating systems, programming languages, and computing hardware. CORBA has many of the same design goals as object-oriented programming: encapsulation and reuse. CORBA uses an object-oriented model although the systems that utilize CORBA do not have to be object-oriented. CORBA is an example of the distributed object paradigm.

CORBA enables software written in different languages and running on different computers to work with each other seamlessly. Implementation details from specific operating systems, programming languages, and hardware platforms are all removed from the responsibility of developers who use CORBA. CORBA normalizes the method-call semantics between application objects residing either in the same address-space (application) or in remote address-spaces (same host, or remote host on a network). Version 1.0 was released in October 1991.
CORBA uses an interface definition language (IDL) to specify the interfaces which objects present to the outer world. CORBA then specifies a mapping from IDL to a specific implementation language like C++ or Java. Standard mappings exist for Ada, C, C++, C++11, Lisp, Ruby, Smalltalk, Java, COBOL, PL/I and Python. There are also non-standard mappings for Perl, Visual Basic, Erlang, and Tcl implemented by object request brokers (ORBs) written for those languages.
The CORBA specification dictates that there shall be an ORB through which an application interacts with other objects. In practice, this is implemented as follows:
1. The application initializes the ORB and accesses an internal Object Adapter, which maintains things like reference counting, object (and reference) instantiation policies, and object lifetime policies.
2. The Object Adapter is used to register instances of the generated code classes. Generated code classes are the result of compiling the user IDL code, which translates the high-level interface definition into an OS- and language-specific class base for use by the user application. This step is necessary in order to enforce CORBA semantics and provide a clean user process for interfacing with the CORBA infrastructure.
Some IDL mappings are more difficult to use than others. For example, due to the nature of Java, the IDL-Java mapping is rather straightforward and makes usage of CORBA very simple in a Java application. This is also true of the IDL to Python mapping. The C++ mapping is notoriously difficult; the mapping requires the programmer to learn complex and confusing datatypes that predate the C++ Standard Template Library (STL). By contrast, the C++11 mapping is very easy to use, as it uses the STL heavily. Since the C language is not object-oriented, the IDL to C mapping requires a C programmer to manually emulate object-oriented features.
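To make the idea of a mapping concrete, here is roughly the shape of Java interface the IDL-to-Java mapping produces for a small IDL interface, together with a server-side implementation. Real generated code also includes stub, skeleton, helper, and holder classes and CORBA base types, all omitted here; the Counter name is invented for illustration.

```java
// For the IDL declaration
//     interface Counter { long increment(); };
// the IDL-to-Java compiler generates (approximately) this interface,
// since IDL "long" maps to Java int.
interface Counter {
    int increment();
}

// A server-side implementation of the mapped interface. In a real ORB
// this class would extend a generated skeleton base class.
class CounterImpl implements Counter {
    private int value = 0;
    public int increment() { return ++value; }
}
```

A client programs only against the Counter interface; whether the implementation is local or behind an ORB is invisible at this level.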
The following describes some of the most significant ways that CORBA can be used to facilitate communication among distributed objects.

Objects By Reference
A client interacts with a remote object through a reference to it. This reference is acquired through a stringified Uniform Resource Identifier (URI), a NameService lookup (similar to a Domain Name System (DNS) lookup), or passed in as a method parameter during a call.
Object references are lightweight objects matching the interface of the real object (remote or local). Method calls on the reference result in subsequent calls to the ORB and blocking on the thread while waiting for a reply, success or failure. The parameters, return data (if any), and exception data are marshaled internally by the ORB according to the local language and OS mapping.
Data By Value
The CORBA Interface Definition Language provides the language- and OS-neutral inter-object communication definition. CORBA Objects are passed by reference, while data (integers, doubles, structs, enums, etc.) are passed by value. The combination of Objects-by-reference and data-by-value provides the means to enforce strong data typing while compiling clients and servers, yet preserve the flexibility inherent in the CORBA problem-space.
Objects By Value (OBV)
Apart from remote objects, CORBA and RMI-IIOP define the concept of Objects By Value (OBV) and valuetypes. The code inside the methods of valuetype objects is executed locally by default. If the OBV has been received from the remote side, the needed code must be either known a priori to both sides or dynamically downloaded from the sender. To make this possible, the record defining the OBV contains a code base, a space-separated list of URLs from which this code should be downloaded. An OBV can also have remote methods.
CORBA Component Model (CCM)
The CORBA Component Model (CCM) is an addition to the family of CORBA definitions. It was introduced with CORBA 3 and describes a standard application framework for CORBA components. Though not dependent on the language-specific Enterprise JavaBeans (EJB) model, it is a more general form of EJB, providing four component types instead of the two that EJB defines. It provides an abstraction of entities that can provide and accept services through well-defined named interfaces called ports.
The CCM has a component container, where software components can be deployed. The container offers a set of services that the components can use. These services include (but are not limited to) notification, authentication, persistence and transaction processing. These are the most-used services any distributed system requires, and, by moving the implementation of these services from the software components to the component container, the complexity of the components is dramatically reduced.

Benefits
CORBA's benefits include language- and OS-independence, freedom from technology-linked implementations, strong data typing, a high level of tunability, and freedom from the details of distributed data transfers.
Language independence
CORBA was designed at the outset to free engineers from the limitations of basing their designs on a particular software language. Currently there are many languages supported by various CORBA providers, the most popular being Java and C++. There are also C++11, C-only, Smalltalk, Perl, Ada, Ruby, and Python implementations, to mention just a few.
OS-independence
CORBA's design is meant to be OS-independent. CORBA is available in Java (OS-independent), as well as natively for Linux/Unix, Windows, Solaris, OS X, OpenVMS, HP-UX, Android, and others.
Freedom from technologies
One of the main implicit benefits is that CORBA provides a neutral playing field for engineers to be able to normalize the interfaces between various new and legacy systems. When integrating C, C++, Object Pascal, Java, Fortran, Python, and any other language or OS into a single cohesive system design model, CORBA provides the means to level the field and allow disparate teams to develop systems and unit tests that can later be joined together into a whole system. This does not rule out the need for basic system engineering decisions, such as threading, timing, object lifetime, etc. These issues are part of any system regardless of technology. CORBA allows system elements to be normalized into a single cohesive system model.
For example, the design of a multitier architecture is made simple using Java Servlets in the web server and various CORBA servers containing the business logic and wrapping the database accesses. This allows the implementations of the business logic to change, while the interface changes would need to be handled as in any other technology. For example, a database wrapped by a server can have its database schema change for the sake of improved disk usage or performance (or even whole-scale database vendor change), without affecting the external interfaces. At the same time, C++ legacy code can talk to C/Fortran legacy code and Java database code, and can provide data to a web interface.

Data-typing
CORBA provides flexible data typing, for example an "Any" datatype. CORBA also enforces strict data typing, reducing human error. In a situation where name-value pairs are passed around, it is conceivable that a server provides a number where a string was expected. The CORBA Interface Definition Language provides the mechanism to ensure that user code conforms to method names, return and parameter types, and exceptions.
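The name-value-pair hazard can be sketched in Java: an untyped map defers the type error to the point of use, while a declared interface rules it out at compile time, which is the guarantee IDL provides across languages. The names below are illustrative.

```java
import java.util.Map;

// An untyped name-value lookup: the caller must guess the runtime type,
// and a server can store a number where a string was expected.
class TypingDemo {
    static Object lookupUntyped(Map<String, Object> pairs, String key) {
        return pairs.get(key);
    }
}

// A typed interface: the compiler rejects any implementation whose
// name() does not return a String, analogous to IDL-enforced signatures.
interface StudentRecord {
    String name();
}
```

With the typed interface, the mistake is impossible to compile; with the map, it is only discovered when the value is cast and used.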
High tunability
Many implementations (e.g. OmniORB, an open-source C++ and Python implementation) have options for tuning the threading and connection management features. Not all implementations provide the same features; this is up to the implementor.
Freedom from data-transfer details
While handling the low-level connection and threading details, CORBA provides a high level of detail about error conditions. This is defined in the CORBA-defined standard exception set and the implementation-specific extended exception set. Through these exceptions, the application can determine whether a call failed for a reason such as "Small problem, so try again", "The server is dead", or "The reference does not make sense". The general rule is that not receiving an exception means the method call completed successfully. This is a very powerful design feature.
Compression
CORBA marshals its data in a binary form and supports compression. IONA, Remedy IT, and Telefónica have worked on an extension to the CORBA standard that delivers compression. This extension is called ZIOP and this is now a formal OMG standard.
Structure of CORBA IDL
The CORBA Interface Definition Language (IDL) is used to define interfaces to objects in your network. This chapter introduces the features of CORBA IDL and illustrates the syntax used to describe interfaces. The first step in developing a CORBA application is to define the interfaces to the objects required in your distributed system. To define these interfaces, you use CORBA IDL.
IDL allows you to define interfaces to objects without specifying the implementation of those interfaces. To implement an IDL interface you must:
1. Define a Java class that can be accessed through the IDL interface.
2. Create objects of that class within an Orbix Java server application.
You can implement IDL interfaces using any programming language for which an IDL mapping is available. An IDL mapping specifies how an interface defined in IDL corresponds to an implementation defined in a programming language. CORBA applications written in different programming languages are fully interoperable. CORBA defines standard mappings from IDL to several programming languages, including C++, Java, and Smalltalk. The Orbix Java IDL compiler converts IDL definitions to corresponding Java definitions, in accordance with the standard IDL to Java mapping.
CORBA interface repository

The Interface Repository is based on the CORBA definition of an Interface Repository. It offers a proper subset of the interfaces defined by CORBA; that is, the APIs that are exposed to programmers are implemented as defined by the Common Object Request Broker: Architecture and Specification Revision 2.4. However, not all interfaces are supported. In general, the interfaces required to read from the Interface Repository are supported, but the interfaces required to write to the Interface Repository are not. Additionally, not all TypeCode interfaces are supported.
Administration of the Interface Repository is done using tools specific to the Oracle Tuxedo software. These tools allow the system administrator to create an Interface Repository, populate it with definitions specified in Object Management Group Interface Definition Language (OMG IDL), and then delete interfaces. Additionally, an administrator may need to configure the system to include an Interface Repository server. For a description of the Interface Repository administration commands, see the Oracle Tuxedo Command Reference and Setting Up an Oracle Tuxedo Application.
Several abstract interfaces are used as base interfaces for other objects in the Interface Repository. A common set of operations is used to locate objects within the Interface Repository. These operations are defined in the abstract interfaces IRObject, Container, and Contained described in this chapter. All Interface Repository objects inherit from the IRObject interface, which provides an operation for identifying the actual type of the object. Objects that are containers inherit navigation operations from the Container interface. Objects that are contained by other objects inherit navigation operations from the Contained interface. The IDLType interface is inherited by all Interface Repository objects that represent OMG IDL types, including interfaces, typedefs, and anonymous types. The TypedefDef interface is inherited by all named noninterface types.
The Interface Repository consists of two distinct components: the database and the server. The server performs operations on the database. The Interface Repository database is created and populated using the idl2ir administrative command. For a description of this command, see the Oracle Tuxedo Command Reference and Setting Up an Oracle Tuxedo Application. From the programmer’s point of view, there is no write access to the Interface Repository. None of the write operations defined by CORBA are supported, nor are set operations on nonread-only attributes.
Read access to the Interface Repository database is always through the Interface Repository server; that is, a client reads from the database by invoking methods that are performed by the server. The read operations are as defined by the CORBA Common Object Request Broker: Architecture and Specification.

CORBA Services and CORBA Component Model
CORBA Services
Another important part of the CORBA standard is the definition of a set of distributed services to support the integration and interoperation of distributed objects. As depicted in the graphic below, the services, known as CORBA Services or COS, are defined on top of the ORB. That is, they are defined as standard CORBA objects with IDL interfaces. As such, the services are sometimes referred to as "Object Services."
COM and .NET
COM and .NET are complementary development technologies. The .NET Common Language Runtime provides bi-directional, transparent integration with COM. This means that COM and .NET applications and components can use functionality from each system. This protects your existing investments in COM applications while allowing you to take advantage of .NET at a controlled pace. COM and .NET can achieve similar results. The .NET Framework provides developers with a significant number of benefits including a more robust, evidence-based security model, automatic memory management and native Web services support. For new development, Microsoft recommends .NET as a preferred technology because of its powerful managed runtime environment and services.

Marshalling
In computer science, marshalling (or marshaling) is the process of transforming the memory representation of an object into a data format suitable for storage or transmission; it is typically used when data must be moved between different parts of a computer program or from one program to another. Marshalling is similar to serialization and is used to communicate with remote objects by way of a serialized object. It simplifies complex communication, using custom or complex objects to communicate instead of primitives. The opposite, or reverse, of marshalling is called unmarshalling (or demarshalling, similar to deserialization).
Marshalling is used within implementations of different remote procedure call (RPC) mechanisms, where it is necessary for transporting data between processes and/or between threads. In Microsoft's Component Object Model (COM), interface pointers must be marshalled when crossing COM apartment boundaries (that is, crossing between instances of the COM library). In the .NET Framework, the conversion between an unmanaged type and a CLR type, as in the P/Invoke process, is also an example of an action that requires marshalling to take place.
Additionally, marshalling is used extensively within scripts and applications that utilize the XPCOM technologies provided within the Mozilla application framework. The Mozilla Firefox browser is a popular application built with this framework that additionally allows scripting languages to use XPCOM through XPConnect (Cross-Platform Connect).
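The marshalling/unmarshalling round trip described above can be sketched in a few lines. This is a minimal illustration using Python's pickle module as the serialization mechanism; the Order class and its fields are hypothetical, chosen only for the example.

```python
import pickle

class Order:
    """A custom (non-primitive) object to be sent between programs."""
    def __init__(self, order_id, items):
        self.order_id = order_id
        self.items = items

# Marshalling: transform the in-memory representation of the object
# into a byte stream suitable for storage or transmission.
order = Order(42, ["keyboard", "mouse"])
wire_data = pickle.dumps(order)

# Unmarshalling (demarshalling): reconstruct an equivalent object
# from the byte stream on the receiving side.
received = pickle.loads(wire_data)
assert received.order_id == 42
assert received.items == ["keyboard", "mouse"]
```

A real RPC mechanism adds wire-format rules, versioning and proxies on top of this basic round trip, but the transform-transmit-reconstruct pattern is the same.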
Custom Marshalling
Whenever you want complete control over everything that happens in the entire communication process, you should use custom marshalling. With this scheme, nothing comes for free and you will have to implement the marshalling process yourself in its entirety. You might want to use custom marshalling if you will be using a transport mechanism other than the usual RPC mechanism for communication, e.g. named pipes, HTTP or raw TCP/IP.
With standard marshalling, COM provides you with an implementation of the IMarshal interface; with custom marshalling, you will have to implement IMarshal, with its six member functions, yourself (in addition to the proxy/stub DLL and CoClass implementation). The functions of IMarshal will be called by the COM API function CoMarshalInterface in order to package up the parameters and get them on their way. In reality, implementing custom marshalling is probably too much work to be practically useful. This is also the main reason why we have not dealt with this kind of marshalling.
Standard Marshalling
Sometimes the restrictions of type library marshalling are all but unacceptable. This is the case when you need more sophisticated structures than can be provided with safearrays, or when you are trying to turn legacy code (e.g. a huge C library) into a COMponent.
In this case you will have to provide your own proxy/stub DLL for marshalling the structures. The traditional way of defining an interface is to use IDL and compile the file with Microsoft’s IDL compiler, MIDL. Based on the interface description, MIDL can generate a type library (.TLB), and the necessary C files for producing the proxy/stub DLL.
In Delphi 3.x, a type library is defined using a graphical tool known as the Type Library Editor. This will automatically generate a type library, Pascal wrappers for the interface definition (akin to those listed in the appendices) and a skeleton implementation of the associated CoClass. Unfortunately, neither IDL structs nor unions are supported in the editor.
Service-Oriented Architecture (SOA)
Service-oriented architecture (SOA) is a software design and software architecture design pattern based on distinct pieces of software providing application functionality as services to other applications. This is known as service-orientation. It is independent of any vendor, product or technology.
A service is a self-contained unit of functionality, such as retrieving an online bank statement. Services can be combined by other software applications to provide the complete functionality of a large software application. SOA makes it easy for computers connected over a network to cooperate. Every computer can run an arbitrary number of services, and each service is built in a way that ensures that the service can exchange information with any other service in the network without human interaction and without the need to make changes to the underlying program itself.
Services are unassociated, loosely coupled units of functionality that are self-contained. Each service implements at least one action, such as submitting an online application for an account, retrieving an online bank statement or modifying an online booking or airline ticket order. Within an SOA, services use defined protocols that describe how services pass and parse messages using description metadata, which in sufficient detail describes not only the characteristics of these services, but also the data that drives them. Programmers have made extensive use of XML in SOA to structure data that they wrap in a nearly exhaustive description-container. Analogously, the Web Services Description Language (WSDL) typically describes the services themselves, while SOAP (originally Simple Object Access Protocol) describes the communications protocols.

Design Concept
SOA is based on the concept of a service. Depending on the service design approach taken, each SOA service is designed to perform one or more activities by implementing one or more service operations. As a result, each service is built as a discrete piece of code. This makes it possible to reuse the code in different ways throughout the application by changing only the way an individual service interoperates with other services that make up the application, versus making code changes to the service itself. SOA design principles are used during software development and integration.
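The reuse-through-interoperation idea above can be sketched with a hypothetical, self-contained service; the StatementService name and canned data are invented for the example. Two different consumers reuse the same discrete service without any change to the service's own code.

```python
class StatementService:
    """A self-contained service exposing one operation."""
    def get_statement(self, account_id):
        # In a real SOA deployment this would call a remote endpoint;
        # canned data keeps the sketch self-contained.
        return {"account": account_id, "balance": 100.0}

class WebPortal:
    """One consumer: renders the statement as text for a browser."""
    def __init__(self, statements):
        self.statements = statements
    def render(self, account_id):
        s = self.statements.get_statement(account_id)
        return f"Account {s['account']}: balance {s['balance']}"

class MobileApp:
    """Another consumer: only needs the balance figure."""
    def __init__(self, statements):
        self.statements = statements
    def summary(self, account_id):
        return self.statements.get_statement(account_id)["balance"]

service = StatementService()
portal_view = WebPortal(service).render("A-1")
mobile_view = MobileApp(service).summary("A-1")
```

Changing how a consumer uses the service (here, rendering versus summarizing) requires no change to the service itself, which is the point the design principle makes.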
SOA generally provides a way for consumers of services, such as web-based applications, to be aware of available SOA-based services. For example, several disparate departments within a company may develop and deploy SOA services in different implementation languages; their respective clients will benefit from a well-defined interface to access them. SOA defines how to integrate widely disparate applications for a Web-based environment and uses multiple implementation platforms. Rather than defining an API, SOA defines the interface in terms of protocols and functionality. An endpoint is the entry point for such a SOA implementation.
Service-orientation requires loose coupling of services with operating systems and other technologies that underlie applications. SOA separates functions into distinct units, or services, which developers make accessible over a network in order to allow users to combine and reuse them in the production of applications. These services and their corresponding consumers communicate with each other by passing data in a well-defined, shared format, or by coordinating an activity between two or more services. For some, SOA can be seen in a continuum from older concepts of distributed computing and modular programming, through SOA, and on to current practices of mashups, SaaS, and cloud computing (which some see as the offspring of SOA).
Enterprise Service Bus (ESB)
An enterprise service bus (ESB) is a software architecture model used for designing and implementing communication between mutually interacting software applications in a service-oriented architecture (SOA). As a software architectural model for distributed computing, it is a specialty variant of the more general client-server model and promotes agility and flexibility with regard to communication between applications. Its primary use is in enterprise application integration (EAI) of heterogeneous and complex landscapes.

The concept was developed in analogy to the bus concept found in computer hardware architecture, combined with the modular and concurrent design of high-performance computer operating systems. The motivation was to find a standard, structured and general-purpose concept for describing the implementation of loosely coupled software components (called services) that are expected to be independently deployed, running, heterogeneous and disparate within a network. ESB is also the network design intrinsically adopted by the World Wide Web and a common implementation pattern for service-oriented architecture.
An ESB generally provides an abstraction layer on top of an implementation of an enterprise messaging system. In order for an integration broker to be considered a true ESB, it would need to have its base functions broken up into their constituent and atomic parts. The atomic components would then be capable of being separately deployed across the bus while working together in harmony as necessary.
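The routing role of a bus, as opposed to point-to-point connections, can be illustrated with a toy in-process publish/subscribe sketch. The channel names and handlers below are made up, and a real ESB layers message transformation, security, transports and persistence on top of its messaging system; this only shows the routing idea.

```python
from collections import defaultdict

class ServiceBus:
    """Toy bus: services register handlers for named channels, and
    messages are routed through the bus rather than through direct
    point-to-point calls between applications."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, channel, handler):
        self.handlers[channel].append(handler)

    def publish(self, channel, message):
        # Every subscriber on the channel receives the message;
        # the publisher does not know who they are.
        return [h(message) for h in self.handlers[channel]]

bus = ServiceBus()
bus.subscribe("orders.created", lambda m: f"billing saw {m['id']}")
bus.subscribe("orders.created", lambda m: f"shipping saw {m['id']}")
results = bus.publish("orders.created", {"id": 7})
```

Because the billing and shipping handlers are registered separately, each could be deployed or replaced independently, which mirrors the "constituent and atomic parts" requirement described above.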
Web Services Technologies
A Web service is a set of related application functions that can be programmatically invoked over the Internet. Businesses can dynamically mix and match Web services to perform complex transactions with minimal programming. Web services allow buyers and sellers all over the world to discover each other, connect dynamically, and execute transactions in real time with minimal human interaction.
The web is a vast, interconnected global information system. Information on the web is hosted on web sites which contain text, pictures and multimedia. This information can be viewed using browsers like Internet Explorer, Firefox and Lynx. These web sites are indexed by search engines, which enable us to find required information. Popular search engines are Yahoo, Google and MSN. A web service is also hosted on a web server, but it can be accessed only programmatically, not through a browser. So, a web service uses the web to be accessed and used.
XML
XML is a markup language. With a markup language, we can structure a document using tags. Using XML, we can also customize the tags. Each bit of information in a document is defined by tags, without the overload of formatting present in HTML. This type of representation is suitable for application-to-application communication. Another feature of XML is that the vocabulary can be extended. Vocabulary refers to the types of tags used to structure a document in XML.
For example, you can create your own tag and use it to represent contact information in an electronic document. This extended vocabulary is supplied to the receiving application using what is called an XML schema. XML Schema is a W3C standard. The XML schema is used by the receiving application to validate the document being received.
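A custom vocabulary and a receiving application that reads it can be sketched with Python's standard xml.etree module. The <contacts>/<contact> tags below are an invented vocabulary for the example, not part of any standard.

```python
import xml.etree.ElementTree as ET

# A document structured with custom, application-defined tags:
# no formatting markup, only the data and its meaning.
doc = """
<contacts>
  <contact>
    <name>Ada Lovelace</name>
    <email>ada@example.org</email>
  </contact>
</contacts>
"""

# The receiving application navigates the structure programmatically.
root = ET.fromstring(doc)
names = [c.findtext("name") for c in root.findall("contact")]
```

Validation against an XML Schema would normally be performed before this step, so the application can rely on the tags it expects actually being present.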
Many times an application needs to transform the XML document being received into a different format. The receiving application does this using XSLT. XML's transformability is one of the reasons Web services work well for building portals. As shown in Figure, backend Web services deliver content to the portal in XML format. The portal presentation logic, called a portlet, can transform the content as needed to work with the portal user interface, which might be a frame within a browser, a portable handset, or a standard telephone (using voice generation and voice recognition software). XML supports multichannel portal applications.
SOA
SOA stands for service-oriented architecture. SOA is a concept and describes a set of well-established patterns. Each pattern represents a mechanism to describe a service, to advertise and discover a service, and to communicate with a service. Web services rely on these patterns, and client applications connect to a service using these patterns.
In the SOA concept, three basic roles are defined. They are:
Service provider - who develops or supplies the service.
Service consumer - who uses the service.
Service broker - who facilitates the advertising and discovery process.
The three basic operations in the SOA are register, find, and bind. The service provider registers the service with a service broker. A service consumer queries the service broker to find a compatible service. The service broker gives the service consumer directions on how to find the service and its service contract. The service consumer uses the contract to bind the client to the service, at which point the client and service can communicate. The standard technologies for implementing the SOA patterns with Web services are Web Services Description Language (WSDL), Universal Description, Discovery and Integration (UDDI), and Simple Object Access Protocol (SOAP).
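The register/find/bind cycle can be sketched as a toy in-process broker. All names here (ServiceBroker, "quote-service", the lambda endpoint) are hypothetical; in a real deployment the broker would be a UDDI registry and the bind step would build a client proxy from the WSDL contract.

```python
class ServiceBroker:
    """Toy broker supporting the register and find operations."""
    def __init__(self):
        self.registry = {}

    def register(self, name, endpoint):   # provider -> broker
        self.registry[name] = endpoint

    def find(self, name):                 # consumer -> broker
        return self.registry.get(name)

def bind(endpoint):                       # consumer binds to provider
    # Binding would normally construct a client proxy from the
    # service contract; here the endpoint is simply a callable
    # standing in for that proxy.
    return endpoint

broker = ServiceBroker()
# 1. The provider registers its service with the broker.
broker.register("quote-service",
                lambda symbol: {"symbol": symbol, "price": 10.5})
# 2. The consumer finds a compatible service, then 3. binds to it.
proxy = bind(broker.find("quote-service"))
quote = proxy("ACME")
```

Once bound, the consumer talks to the service through the proxy and never needs to know where or how the service is actually implemented.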
WSDL, UDDI, and SOAP are the three core technologies most often used to implement Web services. WSDL provides a mechanism to describe a Web service. UDDI provides a mechanism to advertise and discover a Web service. And SOAP provides a mechanism for clients and services to communicate. Figure 3-4 shows these technologies mapped to the SOA.
WSDL is an XML language that describes a Web service. A WSDL document describes:
1. The functionality of a Web service,
2. How the Web service communicates,
3. Where the Web service resides.

The WSDL document contains three parts which describe everything needed to call a Web service. This document / file can be compiled into application code, which a client application uses to access the Web service. This application code is called a client proxy; in other words, the WSDL document / file compiles into a client proxy. The client application calls the client proxy, and the proxy constructs the messages and manages the communication on behalf of the client application.
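The three parts can be seen in a heavily abridged WSDL skeleton. The names below (QuoteService, GetQuote, and so on) are placeholders, and the message definitions, SOAP binding details and address that a complete WSDL file requires are omitted; this only shows where each of the three descriptions lives.

```xml
<definitions name="QuoteService"
             xmlns="http://schemas.xmlsoap.org/wsdl/">
  <!-- WHAT: the functionality (operations) the service offers -->
  <portType name="QuotePortType">
    <operation name="GetQuote"/>
  </portType>

  <!-- HOW: the protocol and encoding used to communicate -->
  <binding name="QuoteSoapBinding" type="QuotePortType"/>

  <!-- WHERE: the network endpoint at which the service resides -->
  <service name="QuoteService">
    <port name="QuotePort" binding="QuoteSoapBinding"/>
  </service>
</definitions>
```

A proxy generator reads exactly these three parts to produce the client proxy code described above.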
