While Service Oriented Architecture (SOA) emerged as a programming-paradigm buzzword in the 90s, engineers like Erik Townsend established and promoted the underlying concepts during the early 80s. In fact, it remains somewhat unclear exactly when and where SOA coalesced as a methodology and who deserves the credit. Many hold that the Gartner Group ‘invented’ SOA; Townsend, however, would tell you that Gartner merely formalized the trend in the 90s (Townsend).

Regardless of how it came to be, SOA grew from a notion to break down the functions of a large organization into services that work together seamlessly; the earliest example related by Townsend involved ‘islands of automation’ within DEC in 1982 (Townsend). During the 90s, the idea grew into a programming paradigm that describes services as self-contained modules surrounded by exhaustive metadata about what each service does in an organization’s overall solution architecture: for example, check processing or account creation modules, i.e., discrete functions. SOA knits those services together via connections or messages sent between service providers and service consumers (Barry). In other words, consumers (users) within an organization can take advantage of the services offered in the organization’s solution without any knowledge of how the individual services communicate or how the development team implemented them. In an ideal SOA, consumers simply choose from a ‘palette’ of services, string them together to get a particular task done, and obtain results; the SOA takes care of all communication and integration on the back end, with the consumer none the wiser. The ARGO bank branch software support project at my employer, CTS, is the best real-world example I can think of. The engineers trained in ARGO support do almost no development; the internal ARGO development team takes care of all the inner workings and logic behind each module, along with inter-module communication. Our support engineers simply choose which of the predefined modules to use on a per-branch basis and create a solution by ‘stringing’ them together logically to solve problems.

The promise of the principles behind SOA, reduced costs and increased agility, caused many organizations to jump on the bandwagon and begin large overhaul initiatives, with major organizations like IBM offering SOA consulting services. However, as Anne Thomas Manes puts it, SOA ‘died’ in 2009 with the economic recession, noting that ‘SOA fatigue has turned into SOA disillusionment’. According to Manes, organizations have invested millions in SOA initiatives to curb costs and increase agility, yet many have experienced wildly increased costs and countless failed projects. She further adds that most successful organizations include SOA as part of a much larger change initiative rather than relying on SOA alone to effect the transformation. ‘Long live services’, she says, so long as they exist within the overarching strategy (Manes). Counter to Manes, Doug Barry holds that SOA remains relevant to organizations, citing high traffic and interest on his website. Barry goes on to say that most IT projects fail regardless of whether the group employed SOA. He also contends that SOA need not be part of a larger, massive transformation; rather, organizations should vet changes in smaller increments, prove those changes effective, and move on to larger initiatives (Barry).

With the relevance of SOA itself subject to some contention in professional circles, its progeny live on in the form of newer paradigms centered on services to consumers, most notably software as a service (SaaS) and cloud computing, both modern initiatives experiencing success in the current technological climate. Even if SOA might be ‘dead’, the exercise was not in vain.

-Mike Gann (EGR644, Spring 2012)

Barry, Douglas K. Service-Oriented Architecture (SOA) Definition.
http://www.service-architecture.com/web-services/articles/service-oriented_architecture_soa_definition.html

Manes, Anne Thomas. SOA is Dead; Long Live Services.
http://apsblog.burtongroup.com/2009/01/soa-is-dead-long-live-services.html

Townsend, Erik. The 25-Year History of Service-Oriented Architecture.
http://www.eriktownsend.com/white-papers-technology/doc_view/4-1-the-25-year-history-of-service-oriented-architecture.raw

The Business Process Execution Language (BPEL) is an XML (eXtensible Markup Language) based language, initially developed by IBM and Microsoft to standardize business process execution. In 2003, contributors from BEA Systems, SAP, and Siebel joined IBM and Microsoft to create BPEL4WS (BPEL for Web Services) Version 1.1, which they submitted to OASIS (Organization for the Advancement of Structured Information Standards) for standardization. OASIS is a global organization that drives the adoption of e-business and web service standards. BPEL Version 1.1 gathered momentum and attention from major vendors, leading to numerous BPEL-based orchestration engines created for commercial use. The official WS-BPEL 2.0 (Web Services BPEL) was released in April 2007 under the stewardship of OASIS. Lately, modern ERP (Enterprise Resource Planning) software vendors have been adapting to BPEL standards and building their applications on BPEL engines. One example is Oracle’s new Fusion application: Oracle spent a lot of time and money converting its existing ERP applications into business-process-based applications, using Oracle’s BPEL engine and JDeveloper development tools to build the Fusion applications.

BPEL was created in an effort to standardize business process execution. It is a standard for assembling a set of discrete services into an end-to-end process flow, and it provides a framework for standard business processes and business interaction protocols. Modern e-business applications perform machine-to-machine interaction using Web Services, which have become an industry standard for communication between two independent computer systems over the internet. The framework used to deploy Web Services is SOA (Service Oriented Architecture). BPEL complements the Web Services interaction model and enables it to support business transactions; all external communication in BPEL happens through Web Services.

A BPEL process consists of one or more web services orchestrated in the specific order in which they should be invoked. Web services can be invoked in parallel or sequentially, and conditional statements can be added to control which services are invoked. We can also add process looping, assign variables, handle exceptions, and so on. Using these constructs, a complex business process can be constructed in an executable form.

A typical BPEL process first receives a request; the process then invokes the orchestrated web service, gets the response from the remote system, and, based on the response and the conditions defined, executes the next step in the business process. Each step in a BPEL process is called an activity. Examples of activities include:

  • Invoking a web service
  • Receiving a request
  • Generating a response and replying
  • Manipulating variables
  • Handling exceptions
  • Waiting
  • Terminating the process
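
The flow above can be sketched in ordinary Python, treating each activity as a plain function call. The service names and business rules below are invented purely for illustration; in a real BPEL engine each step would be an XML activity invoking a remote web service.

```python
# A minimal sketch of a BPEL-style process: receive a request, invoke
# services in a fixed order, branch on a condition, and reply.
# The "services" are hypothetical stand-ins for remote web services.

def credit_check_service(customer_id):
    # Stand-in for a remote <invoke>; pretend IDs over 100 have good credit.
    return {"customer_id": customer_id, "approved": customer_id > 100}

def account_service(customer_id):
    # Second hypothetical service, invoked only when the check passes.
    return {"account": f"ACCT-{customer_id}"}

def loan_process(request):
    """Orchestrate the services in sequence, like a BPEL <sequence>."""
    customer_id = request["customer_id"]          # <receive>
    result = credit_check_service(customer_id)    # <invoke>
    if result["approved"]:                        # conditional branch (<if>)
        account = account_service(customer_id)    # second <invoke>
        return {"status": "approved", **account}  # <reply>
    return {"status": "rejected"}

print(loan_process({"customer_id": 123}))
# → {'status': 'approved', 'account': 'ACCT-123'}
```

The point of BPEL is that this orchestration logic lives in standardized XML rather than in any one programming language.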

BPEL is a standard programming language like Java, but it is not as powerful as Java; it is much simpler to use and well suited to business process definition. BPEL is not a replacement for, but a supplement to, present programming languages like Java. BPEL does not have a standard graphical notation; the OASIS committee decided to stay away from creating a notation for BPEL. Some vendors have created their own notations, and others have proposed using BPMN (Business Process Model and Notation) to design and document BPEL processes. IT vendors such as Oracle have developed IDE (Integrated Development Environment) tools, such as JDeveloper, to build business processes. These IDEs have pre-built functionality for developing BPEL processes by dragging and dropping BPEL elements/activities and connecting them together based on the business process. The products are mostly used by technical developers and technical architects rather than business analysts. BPEL tools radically reduce the costs and complexity of process interaction initiatives and increase business agility.

By: Dhinakaran Gurusamy


WHAT IS IT?

SOA is a software architecture based upon a specific set of design principles and intended to be used for the automation of business processes. It is termed “Service Oriented” because the software modules developed with this paradigm are referred to as “Services”, and because those modules are intended to be combined to provide the “Services” necessary for one or more business processes.

The name “Service Oriented Architecture” was originally coined by Gartner analyst Yefim V. Natis in a 1996 research paper. (1) Unfortunately, the designation has not always been used in a consistent manner. For that reason, The Organization for the Advancement of Structured Information Standards is developing a reference model to “encourage the continued growth of different and specialized SOA implementations whilst preserving a common layer of understanding about what SOA is.” (2)

In his publications, Thomas Erl discusses eight design principles of the Service Oriented Architecture paradigm. (3)

1. Standardized Service Contracts are the mechanism used to document and communicate the information necessary to interact with a particular service.

2. Loose Coupling is the concept of minimizing the interdependencies of services.

3. Service Abstraction involves masking the inner workings of a service in a way that promotes Loose Coupling.

4. Service Reusability refers to the principle of designing services which can be used in multiple business processes.

5. Service Autonomy is the idea that a given service will have as much control as possible over its own environment and resources.

6. Service Statelessness refers to minimizing the need for services to remember information about a previous event or interaction.

7. Service Discovery is the ability to locate and understand a service within an inventory of available services.

8. Service Composability refers to the ability to effectively couple services with one another in order to compose a complex business process solution.
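
A rough Python sketch of how a few of these principles fit together; the service names, operations, and rates below are hypothetical illustrations, not part of Erl's text.

```python
# Sketch of three of Erl's principles: a Standardized Service Contract
# (one agreed-upon interface), Service Statelessness (results depend
# only on the payload passed in), and Service Composability (a consumer
# strings services together through the shared contract).
from abc import ABC, abstractmethod

class ServiceContract(ABC):
    """Standardized Service Contract: every service exposes execute()."""
    @abstractmethod
    def execute(self, payload: dict) -> dict: ...

class TaxService(ServiceContract):
    """Stateless: no memory is kept between calls."""
    def execute(self, payload: dict) -> dict:
        return {"tax": round(payload["amount"] * 0.07, 2)}

class ShippingService(ServiceContract):
    def execute(self, payload: dict) -> dict:
        return {"shipping": 5.0 if payload["amount"] < 50 else 0.0}

def compose(services, payload):
    """Composability: couple services without knowing their internals."""
    result = dict(payload)
    for svc in services:
        result.update(svc.execute(payload))
    return result

print(compose([TaxService(), ShippingService()], {"amount": 40.0}))
```

Because the consumer sees only the contract, either service could be reimplemented or relocated (Loose Coupling, Service Abstraction) without touching the composition.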

WHY USE IT?

Implementation of the SOA paradigm is intended to provide specific benefits. Unfortunately, because of the nature of the architecture, much of the cost associated with the implementation is borne during the initial stages. It takes a significant amount of time and effort to develop the services required to begin the replacement of existing systems and thereby realize the benefits of SOA. In the interim, IT costs are likely to increase significantly.

Ultimately, the use of SOA will result in the creation of an inventory of services. With these services available to quickly couple together, the business will be able to react more rapidly and with less cost when opportunities or challenges arise.

Once implemented properly there should be several more significant benefits to the business:

1. Reduced IT cost due to the ability to reuse a single service in multiple business processes. This should decrease the amount of redundant software logic that will be developed and maintained. There should be a reduced incidence of software errors.

2. Improved and less costly regulatory compliance due to the elimination of redundant software logic. A specific piece of business logic will be found in a single location and as a result it will be better controlled, better understood, and more easily maintained.

3. Increased flexibility and scalability because services will not be location or platform dependent. If the hardware technology changes, the relationships between the services can continue as usual.

4. Improved alignment of business and technology – “With SOA, business managers work with IT to identify business services. Together, they determine policy and best practices. These policies and best practices become codified business services that represent honed company business processes.”(4)

RE-ENGINEERING WITH SOA.

It can be argued that SOA is a beneficial tool for use in re-engineering corporations. Because the Service Oriented Architecture results in software that can be more easily adapted to changing business requirements, the implementation of a SOA can facilitate the realignment of business units and business processes. In addition, the business analysis required by the SOA development process will provide an opportunity for a review of current business processes.

Adoption of the SOA paradigm may also necessitate a “re-engineering” of the corporate IT organization and its relationship to the business units. With traditional software implementation methods a group of IT professionals will typically be responsible for the software that automates a specific business process. In contrast, individual SOA services are designed to be used in multiple business processes which may cross business unit lines. Since individual business units will not have ownership of specific SOA services they may be less inclined to include the development costs in their budgets. As a result, corporations will need to reconsider their IT structure in order to implement new funding and support models.

By: Blane McCarthy

References:

1. Rajiv Ramaratnam; “An Analysis of Service Oriented Architectures”, Massachusetts Institute of Technology, June 2007

2. http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=soa-rm

3. http://www.soaprinciples.com

4. Judith Hurwitz, Robin Bloor, Marcia Kaufman, and Dr. Fern Halper; “Service Oriented Architecture For Dummies, 2nd IBM Limited Edition”, Wiley Publishing, Inc. 2009

http://toronix.com/Documents/ToronixNewsletterSOA-ESB.pdf

Erik Townsend; “The 25 Year History of Service Oriented Architecture”, July 17, 2008

MODEL DRIVEN ARCHITECTURE

April 15, 2012

Introduction

At its core, MDA means using models as the basis for software development.

Model-driven architecture (MDA) is a software design approach for the development of software systems. It provides a set of guidelines for the structuring of specifications, which are expressed as models. Model-driven architecture is a kind of domain engineering, and supports model engineering of software systems. It was launched by the Object Management Group (OMG) in 2001.

Model-Driven Architecture is a philosophy of how models should be used in the software development process. Developers across the software industry are embracing this philosophy as they apply and evolve MDA principles as defined by the OMG.

Model-Driven Architecture (MDA) is a style of enterprise application development and integration, based on using automated tools to build system-independent models and transform them into efficient implementations. The Object Management Group (OMG) has defined standards for representing MDA models, but the principles and practice of MDA are still evolving.

MDA incorporates several other OMG standards in its definition. These other standards are the Unified Modeling Language (UML), Meta-Object Facility (MOF), XML Metadata Interchange (XMI), Enterprise Distributed Object Computing (EDOC), Software Process Engineering Metamodel (SPEM), and the Common Warehouse Metamodel (CWM).

MDA – Different from Other Architectures

The MDA is a new way of writing specifications, based on a platform-independent model. A complete MDA specification consists of a definitive platform-independent base UML model, plus one or more platform-specific models and interface definition sets, each describing how the base model is implemented on a different middleware platform. The MDA focuses primarily on the functionality and behavior of a distributed application or system, not the technology in which it will be implemented. It divorces implementation details from business functions. Thus, it is not necessary to repeat the process of modeling an application or system’s functionality and behavior each time a new technology (e.g., XML/SOAP) comes along. Other architectures are generally tied to a particular technology. With MDA, functionality and behavior are modeled once and only once. Mapping to the supported MDA platforms will be implemented by tools, easing the task of supporting new or different technologies.

Role of UML in the MDA

UML is the key enabling technology for the Model Driven Architecture. Every application using the MDA is based on a normative, platform-independent UML model. By leveraging this universally accepted modeling standard, the MDA allows creation of applications that are portable across, and interoperate naturally across, a broad spectrum of systems from embedded, to desktop, to server, to mainframe, and across the Internet.

MDA – Cross-platform interoperability

Interfaces and implementations of a specification all derive from a common base UML model. This structure of linked models allows automated building of the bridges that connect implementations on those various middleware platforms. And, when the base model for a new specification is being designed, interoperability with other specifications and services can be designed into it.

MDA Tools:

An MDA tool is a tool used to develop, interpret, compare, align, measure, verify, transform, etc., models or metamodels. In the following section, ‘model’ means any kind of model (e.g. a UML model) or metamodel (e.g. the CWM metamodel). In any MDA approach we have essentially two kinds of models: initial models are created manually by human agents, while derived models are created automatically by programs. For example, an analyst may create an initial UML model from observation of some loose business situation, while a Java model may be derived automatically from that UML model by a model transformation operation. An MDA tool may be one or more of the following types:

  • Creation Tool: A tool used to elicit initial models and/or edit derived models.
  • Analysis Tool: A tool used to check models for completeness, inconsistencies, or error and warning conditions. Also used to calculate metrics for the model.
  • Transformation Tool: A tool used to transform models into other models or into code and documentation.
  • Composition Tool: A tool used to compose (i.e. to merge according to a given composition semantics) several source models, preferably conforming to the same metamodel.
  • Test Tool: A tool used to test models as described in Model-based testing.
  • Simulation Tool: A tool used to simulate the execution of a system represented by a given model. This is related to the subject of model execution.
  • Metadata Management Tool: A tool intended to handle the general relations between different models, including the metadata on each model (e.g. author, date of creation or modification, method of creation) and the mutual relations between these models (i.e. one metamodel is a version of another one, one model has been derived from another one by a transformation, etc.)
  • Reverse Engineering Tool: A tool intended to transform particular legacy or information artifact portfolios into full-fledged models.
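
As a toy illustration of what a Transformation Tool does, the sketch below mechanically derives a platform-specific artifact (Python source code) from a platform-independent model. The model format here is an invented simplification, not an OMG standard.

```python
# A toy MDA "Transformation Tool": a platform-independent model (PIM),
# represented as a plain dict, is transformed into platform-specific
# code. Real tools would consume a UML/MOF model instead.

pim = {  # hypothetical platform-independent model of one business entity
    "class": "Customer",
    "attributes": [("name", "str"), ("balance", "float")],
}

def to_python_class(model):
    """Derive a platform-specific artifact (Python source) from the PIM."""
    args = ", ".join(f"{n}: {t}" for n, t in model["attributes"])
    body = "\n".join(f"        self.{n} = {n}" for n, _ in model["attributes"])
    return (f"class {model['class']}:\n"
            f"    def __init__(self, {args}):\n{body}")

source = to_python_class(pim)
print(source)

namespace = {}
exec(source, namespace)          # the derived artifact is executable code
c = namespace["Customer"]("Ada", 10.0)
```

The same PIM could feed a second transformation targeting Java or SQL, which is the sense in which MDA models functionality "once and only once".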

Benefits of using the MDA

There are many benefits to using the MDA approach, with the most important being:

  • An architecture based on the MDA is always ready to deal with yesterday’s, today’s and tomorrow’s “next big thing”.
  • The MDA will make it easier to integrate applications and facilities across middleware boundaries.
  • Domain facilities defined in the MDA by OMG’s Domain Task Forces will provide much wider interoperability by always being available on a domain’s preferred platform, and on multiple platforms whenever there is a need.

Who is responsible for the MDA?

Although the original impetus for the MDA came from OMG staff, it is now supported by the membership, as demonstrated by unanimous votes of the technical representatives attending the organization’s meeting in late February 2001. Like all the other work of the OMG, MDA was defined and will be developed by the OMG membership, which includes a diverse cross-section of computer vendors, software suppliers, and many end users. The wealth of experience contributed by these hundreds of organizations is one of the great strengths of OMG’s process, and has been put to good use in defining and refining the MDA. The initial vision was drafted in late 2000 in a paper available at http://doc.omg.org/mda, and subsequently refined with the help of many individual contributors into a technical perspective, available at http://doc.omg.org/ab/1-2-4.

MDA Concerns:

Potential concerns that have been raised with the MDA approach include:

  • Incomplete Standards: The MDA approach is underpinned by a variety of technical standards, some of which are yet to be specified.
  • Vendor Lock-in: Although MDA was conceived as an approach for achieving platform independence; current MDA vendors have been reluctant to engineer their MDA toolsets to be interoperable. Such an outcome could result in vendor lock-in for those pursuing an MDA approach.
  • Complexity: There is some complexity of mapping between the various layers.
  • Specialized Skill sets: Practitioners of MDA based software engineering are required to have a high level of expertise in their field.
  • Limited support in the metamodel for handling dynamic behavior.

Conclusion:

Model Driven Architecture provides an open, vendor-neutral approach to the challenge of business and technology change. MDA separates business and application logic from underlying platform technology. Platform independent models of an application or integrated system’s business functionality and behavior, built using UML and the other associated OMG modeling standards can be realized through the MDA on virtually any platform, open or proprietary, including Web Services, .NET, CORBA, J2EE and others. These platform independent models document the business functionality and behavior of an application separate from the technology-specific code that implements it, insulating the core of the application from technology and its relentless churn cycle while enabling interoperability both within and across platform boundaries.


By: Vishnu Thogaripally

Web Services

April 13, 2012

The Transition

The bursting of the dot-com bubble in the fall of 2001 marked a turning point for the web. Many people concluded that the web was over-hyped, when in fact bubbles and consequent shakeouts appear to be a common feature of all technological revolutions. Shakeouts typically mark the point at which an ascendant technology is ready to take its place at center stage. The pretenders are given the bum’s rush, the real success stories show their strength, and there begins to be an understanding of what separates one from the other. (1)

“Advanced technologies, the disappearance of boundaries between national markets, and the altered expectations of customers who now have more choices than ever before have combined to make goals, methods, and basic organizing principles of classic organizations obsolete.”(2)

The advent of Web Services has broken such boundaries. We see the business model moving to application services, which replace the previous paradigm of licensed software tied to single devices. One of the defining characteristics of Internet-era software is that it is delivered as a service, not a product. Platforms are created using Web 2.0 design standards or guidelines. Databases are the new software base, accessed by web services and available to many users and enterprises; examples are the databases behind Google Maps, Google Gmail, Amazon, MapQuest, and eBay.

What are Web services?

The term Web services describes a standardized way of integrating Web-based applications using the XML, SOAP, WSDL and UDDI open standards over an Internet protocol backbone. XML is used to tag the data, SOAP is used to transfer the data, WSDL is used for describing the services available and UDDI is used for listing what services are available. Used primarily as a means for businesses to communicate with each other and with clients, Web services allow organizations to communicate data without intimate knowledge of each other’s IT systems behind the firewall.

Unlike traditional client/server models, such as a Web server/Web page system, Web services do not provide the user with a GUI. Web services instead share business logic, data and processes through a programmatic interface across a network. Developers can add the Web service to a GUI (such as a Web page or an executable program) to offer specific functionality to users.

Web services allow different applications from different sources to communicate with each other without time-consuming custom coding, and because all communication is in XML, Web services are not tied to any one operating system or programming language. For example, Java can talk with Perl; Windows applications can talk with UNIX applications. Web services do not require the use of browsers or HTML. Web services are sometimes called application services. (3)
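
As a small sketch of what "all communication is in XML" looks like in practice, the following Python uses only the standard library to build and parse a SOAP-style envelope. The operation and element names (GetBalance, CustomerId) are hypothetical; the envelope namespace is the standard SOAP 1.1 one.

```python
# Build a SOAP-style XML request and parse it back out, the kind of
# message exchange that lets a Java service talk to a Perl client.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(customer_id):
    # Envelope > Body > operation element, as in a SOAP 1.1 message.
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, "GetBalance")          # hypothetical operation
    ET.SubElement(op, "CustomerId").text = str(customer_id)
    return ET.tostring(env, encoding="unicode")

def parse_request(xml_text):
    # The receiving side cares only about the XML structure, not about
    # what language or platform produced it.
    root = ET.fromstring(xml_text)
    return root.find(".//CustomerId").text

msg = build_request(42)
print(parse_request(msg))   # prints 42
```

Either side of this exchange could be rewritten in any language without the other noticing, which is the interoperability claim made above.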

Web services architecture.

A Web service is a method of communication between two electronic devices over the web. The World Wide Web Consortium (W3C), the international community of member organizations and full-time staff that develops web standards, defines a "Web service" as "a software system designed to support interoperable machine-to-machine interaction over a network". It has an interface described in a machine-processable format (specifically the Web Services Description Language, known by the acronym WSDL). Other systems interact with the Web service in a manner prescribed by its description, using SOAP messages typically conveyed via HTTP with an XML serialization in conjunction with other Web-related standards.

The W3C also states, "We can identify two major classes of Web services, REST-compliant Web services, in which the primary purpose of the service is to manipulate XML representations of Web resources using a uniform set of "stateless" operations; and arbitrary Web services, in which the service may expose an arbitrary set of operations."(4)

Web services refer to a set of programming standards used to make different types of software talk to each other over the Internet, without human intervention.
Web services share three types of computer programming: Extensible Markup Language (XML), Simple Object Access Protocol (SOAP), and Web Services Description Language (WSDL). XML is sort of the Esperanto of Web services. SOAP is sort of a virtual envelope for computer code that acts like an introductory letter, saying what’s inside and where it should go. And WSDL is the nifty little code that allows different types of software to talk directly to each other. That’s the real promised land for Web services: software interacting without humans getting in the way. (3)

XML also has dozens of subsets that address issues specific to different industries such as banking, retailing, and even the computer industry itself. (3)

Every significant Internet application to date has been backed by a specialized database: Google’s web crawl, Yahoo!’s directory (and web crawl), Amazon’s database of products, eBay’s database of products and sellers, MapQuest’s map databases, Napster’s distributed song database. "SQL is the new HTML." Database management is a core competency of Web 2.0 companies, so much so that we have sometimes referred to these applications as "infoware" rather than merely software. (1)

The Web is no longer a collection of static pages of HTML that describe something in the world. Increasingly, the Web is the world—everything and everyone in the world casts an “information shadow,” an aura of data which, when captured and processed intelligently, offers extraordinary opportunity and mind-bending implications. (5)

In our discussion we began with a description of Web Services, then considered what fostered the new era of change, and finally noted that Web 2.0 standards help expand a new collective platform of services: the ever-evolving web of tomorrow.

By: Franklin G. Brown

April 11, 2012

Resources:

(1) Design Patterns and Business Models for the Next Generation of Software

http://oreilly.com/lpt/a/6228; by Tim O’Reilly, 09/30/2005

(2) Reengineering the Corporation; Michael Hammer and James Champy

Harper, C/R 2001,2003

(3) Webopedia; Web services

http://www.webopedia.com/TERM/W/Web_Services.html

(4) From Wikipedia; Web service

http://en.wikipedia.org/wiki/Web_service

(5) Web Squared: Web 2.0 Five Years On; By Tim O’Reilly and John Battelle; Oct. 2009

http://assets.en.oreilly.com/1/event/28/web2009_websquared-whitepaper.pdf

What is XPDL?

XPDL is a standardized format that allows graphical and semantic data to be interchanged among different workflow products. In its early years, XPDL was known as an execution language that competed with the Business Process Execution Language (BPEL), but today it is better known as the industry-standard interchange language that competes with the Business Process Definition Metamodel (BPDM).

To better understand how XPDL simplifies data translation between products, we should take a moment to understand the framework it is built on, the Extensible Markup Language (XML). XML is a markup language that allows you to define tags for data that will be stored, manipulated, or presented. One of the benefits of using a markup language is that you can reference data by its tags rather than by the contents or location of the data. This means that one person can display some information in a manner of their choosing (<year> <title> <publisher>), while someone else references or interprets the same data in a different way (<publisher> <title> <year>). Since we are able to define a common interpretation for the data, we can each customize and supplement the data to our own needs. This makes XML ideal for translating data between multiple products, as long as a common reference is defined. And XPDL isn’t so rigid that you can’t customize it to suit your own needs: "The Workflow Management Coalition acknowledges the fact that workflow languages use different styles and paradigms. To accommodate this, XPDL allows for vendor specific extensions of the language" [1].
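
The tag-ordering point can be shown in a few lines of Python: two documents store the same fields in a different order, yet both are read identically by tag name.

```python
# Two XML snippets with the same data in a different element order.
# Because data is referenced by tag, not by position, both parse alike.
import xml.etree.ElementTree as ET

doc_a = "<book><year>1999</year><title>XML</title><publisher>P</publisher></book>"
doc_b = "<book><publisher>P</publisher><title>XML</title><year>1999</year></book>"

def read_book(xml_text):
    root = ET.fromstring(xml_text)
    # Look each field up by its tag name, ignoring document order.
    return {tag: root.find(tag).text for tag in ("year", "title", "publisher")}

assert read_book(doc_a) == read_book(doc_b)
print(read_book(doc_a))
# → {'year': '1999', 'title': 'XML', 'publisher': 'P'}
```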

So what makes XPDL better than competing standards?

In the execution language realm, what sets XPDL apart from many of its competitors is the way that graphical data (X, Y coordinates) is preserved, so that diagrams actually look the same in multiple products [3], making XPDL better suited to the translation and representation of BPMN diagrams. In the realm of interchange languages, XPDL has existed for over 14 years, and it has done well in supporting multiple vendors with custom versions while maintaining a standardized core that allows simple translation between products.
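
A toy Python illustration of why preserving graphical data matters; the XPDL-like fragment below is a simplified, made-up example rather than the real schema, but it shows how a second tool could recover each activity's position and redraw the diagram with the same layout.

```python
# Parse a made-up XPDL-like fragment that stores X,Y coordinates for
# each activity, so an importing tool can reproduce the diagram layout.
import xml.etree.ElementTree as ET

fragment = """
<Activities>
  <Activity Id="Receive"><Coordinates XCoordinate="40" YCoordinate="100"/></Activity>
  <Activity Id="Approve"><Coordinates XCoordinate="180" YCoordinate="100"/></Activity>
</Activities>
"""

def layout(xml_text):
    root = ET.fromstring(xml_text)
    # Map each activity ID to its stored (x, y) position.
    return {a.get("Id"): (int(a.find("Coordinates").get("XCoordinate")),
                          int(a.find("Coordinates").get("YCoordinate")))
            for a in root.findall("Activity")}

print(layout(fragment))
# → {'Receive': (40, 100), 'Approve': (180, 100)}
```

An execution-only format would drop this layout information entirely, which is exactly the gap XPDL fills for diagram interchange.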

What are some flaws with XPDL?

"Owing to fundamental differences in graph-oriented graphical and block-oriented execution standards, the quality of transformation of the interchange standards is limited by different syntax and structures. For instance, a cyclical and temporal implication in a graphical standard cannot be easily transformed into an execution standard. The translation of recursive capabilities from an execution standard to a graphical standard is an even more challenging task. Currently in the industry, translation from graphical to execution is easier than that from execution to graphical standards. This applies to XPDL and even BPDM." [2]. To put it simply, the biggest flaw in XPDL is that the translation isn’t always perfect; some of the more complex structures just don’t convert well. What makes a statement like this hard to accept is that structures like loops and recursion are not new to the computing world, and computers have no difficulty interpreting them, so drawing them should not be an issue.

Summary

XPDL’s purpose is to promote data portability and interoperability by translating business process definitions between multiple programs. In doing so, it preserves the graphical and syntactic elements so that process definitions look and behave the same in other applications. It was first created 14 years ago, and it was last released in 2008.

Is XPDL still relevant? If you look for information on XPDL through a search engine, you may find it difficult to get results that are current or that reference a product you’ve heard of. But if you visit the Workflow Management Coalition’s website and look at the section on implementations of XPDL, you might notice that it’s implemented in many products and workflow architectures. Many of the products that implement XPDL were authored years ago under older versions of the standard, but some of them do support the newer version 2.1. It may be a few years old, but XPDL still appears to be relevant, and until there’s a successor it will remain the most common interchange-language standard.

LEAN

March 30, 2012

The recent economic crisis that we find ourselves in has made it crystal clear that organizations have to be willing to change and improve if they hope to prosper and, in some cases, survive. Because of these tough economic times, customers are demanding better quality, better delivery, and lower costs like never before. Lean is an Operational Excellence strategy that allows you to change for the better. In fact, the Japanese word Kaizen means to change for the better. The true spirit of Lean is to work with a slow and steady purpose instead of quickly and recklessly. Another common definition is that Lean is the persistent pursuit and elimination of waste. Waste is any activity that is performed but provides no real “value” to the product or service. Lean is not only about attacking waste; it is also very focused on improving the quality of products and the stability of processes.

· History of Lean

It is a common misconception that Lean thinking started in Japan with the founders of Toyota. In 1574, King Henry III watched the Venice Arsenal finish galley ships every hour using continuous-flow processes. In 1910, Henry Ford moved the operations of his American empire, Ford Motor Company, to Highland Park. Due to the continuous flow of massive parts throughout the factory, it is often referred to as the birthplace of Lean manufacturing. One year later, in 1911, Sakichi Toyoda traveled from Japan to the U.S. to study Ford’s revolutionary way of producing the Model T. Shortly after this visit, Toyoda began to conceptualize what we now call the Toyota Production System (1).

As the Toyota Production System (TPS) matured and Toyota began to excel as a corporation, the rest of the world began to take notice. In 1975, the TPS was translated into English, giving non-Japanese-speaking individuals their first opportunity to learn about this production system.

In 1990, a group of American researchers, led by Dr. James Womack, traveled the world to study the various manufacturing processes in use. They concluded that Toyota was by far the most efficient automotive company in the world. It was at this time that one of Dr. Womack’s research assistants coined the phrase “Lean Manufacturing.” The term was then introduced to the world when Dr. Womack’s book “The Machine That Changed the World” was released to the public.

Today, Lean has also spread to many areas besides manufacturing environments. Lean can be found in office environments, where reducing the time it takes to process customer orders is a very common goal. Another area is hospitals, with efforts such as reducing errors and the time it takes to find critical supplies; this has added tremendous value. The Military and the Postal Service have also been known to apply their own forms of Lean processes in their work environments.

· Tools of Lean

The most popular Lean tool used today is 5S. Translated from Japanese into English, 5S stands for sort, straighten, shine, standardize, and sustain. The purpose of 5S is to make abnormalities immediately identifiable. Another Lean tool is Value Stream Mapping, which helps organizations “see” waste like never before. Another powerful Lean tool is Cellular Manufacturing, where product is passed along in a balanced manner, one piece at a time.

The ideal condition for any Lean company is to receive orders at the start of the process and flow the product quickly through all of the processing steps with no delay. However, continuous flow is not always possible, and that is when the concept of “Pull” is implemented. This basically means that an individual work area will not start production until a downstream process or customer tells it to. Other Lean tools that have been implemented in companies include Andon lamps, A3 Thinking, Practical Problem Solving, Error Proofing, 3P, Visual Controls, Supplier Development, Supermarkets, and Water Spiders (2).
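As a minimal sketch of the Pull idea (my own illustration, not taken from the cited sources), the loop below lets an upstream station produce only while a downstream kanban slot is free, which caps work-in-progress at the kanban count no matter how fast the station could run.

```python
from collections import deque

# Minimal pull-system sketch: the upstream station produces only when the
# downstream buffer has an open kanban slot, so work-in-progress can never
# exceed KANBAN_SLOTS.
KANBAN_SLOTS = 3
buffer = deque()
produced, consumed = 0, 0

for step in range(10):
    if len(buffer) < KANBAN_SLOTS:      # downstream signal: a slot is free
        buffer.append(f"unit-{produced}")
        produced += 1
    if step % 2 == 0 and buffer:        # customer pulls every other step
        buffer.popleft()
        consumed += 1

print(produced, consumed, len(buffer))  # → 8 5 3
```

The station idles on steps where the buffer is full, which is the point: in a pull system, demand downstream, not capacity upstream, sets the pace.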

· Philosophies of Lean

One of the philosophies of Lean is Kaizen. Kaizen is a way of thinking that essentially asks one question: “How can we improve something today?” The Kaizen mindset is one that never settles for good enough. Instead, it is always focused on finding a better way, even if it is just a little bit better. Another philosophy is Genchi Genbutsu, which literally means going to see the problem at the place where the work is done. In other words, if there is a problem on the production floor, the management team shouldn’t try to solve it from a board room; they should go to the place where the work is done and see the issue with their own eyes (1).

Finally, the idea of learning from your failures is very important. In order to succeed at Lean, or any other improvement process for that matter, an organization must be willing to try and fail from time to time, since learning from those failures will be the most powerful teacher of all.

By: Wesley D. Sims

Resources:

1. Gemba Academy – Introduction to Lean Manufacturing

www.gembaacademy.com

2. http://en.wikipedia.org/wiki/Lean_manufacturing

3. http://etd.library.pitt.edu/ETD/available/etd-05282003-114851/unrestricted/Abdullah.pdf

Lean Six Sigma

March 30, 2012

I was told I would be directed to a Lean Six Sigma posting……

Yes, and you were told correctly. We will begin the discussion of what Lean Six Sigma is after we cover the history of where it came from as a methodology, finding out a little more about Six Sigma as well as Lean along the way.

What Is Six Sigma?

The objective of Six Sigma Quality is to reduce process output variation so that on a long-term basis, which is the customer’s aggregate experience with our process over time, this will result in no more than 3.4 defect Parts per Million (PPM) opportunities (or 3.4 defects per million opportunities – DPMO). For a process with only one specification limit (upper or lower), this results in six process standard deviations between the mean of the process and the customer’s specification limit (3).

Six Sigma focuses on continuous process improvement to reduce variation in existing processes. Engineers within a company have standards to follow when analyzing current process improvements. As a standard, Six Sigma breaks down into five phases, which help identify the questions that need to be answered in order to improve overall production:

1. Define opportunity

2. Measure performance

3. Analyze opportunity

4. Improve performance

5. Control performance

History of Six Sigma, please………

The term Six Sigma was coined by an engineer named Bill Smith at Motorola, and Motorola holds a federally registered trademark on the term. However, Six Sigma’s roots go back to a German mathematician and scientist named Carl Friedrich Gauss. Gauss solved a complicated problem of the eighth degree in order to determine the orbit of the planet Ceres as seen from Earth. Gauss’s method involved determining a conic section in space, given one focus (the sun), the conic’s intersection with three given lines, and the time it takes the planet to traverse the arcs determined by those lines. This problem leads to an equation of the eighth degree, of which one solution, the Earth’s orbit, is known; the solution sought is then separated from the rest by the remaining physical conditions (1). Even though Gauss devised this equation, he did not have a mathematical formula for measuring such deviations in a user-friendly form.

What is Lean Manufacturing?

Lean manufacturing is a production practice that considers the expenditure of resources for any goal other than the creation of value for the end customer to be wasteful, and thus a target for elimination. Working from the perspective of the customer who consumes a product or service, "value" is defined as any action or process that a customer would be willing to pay for (4).

Essentially, Lean is centered on preserving value through less work. Lean manufacturing is a management philosophy derived mostly from Toyota, the well-known auto manufacturer (4). Toyota developed Lean by cutting seven different wastes from its overall production, which improved overall customer satisfaction. The seven wastes cut were:

1. Transportation

2. Inventory

3. Motion

4. Waiting

5. Over-processing

6. Over-production

7. Defects

Similar to Six Sigma, Lean focuses on continuous process improvement by eliminating waste in an existing process. As a standard, Lean breaks down into five phases, which help identify the questions that need to be answered in order to improve overall production. Those phases are:

1. Analyze opportunity

2. Plan improvement

3. Focus improvement

4. Deliver performance

5. Improve performance

What is Lean Six Sigma?

Lean Six Sigma is a synergized managerial concept of Lean and Six Sigma that results in the elimination of the seven kinds of wastes (classified as Defects, Overproduction, Transportation, Waiting, Inventory, Motion, and over Processing) and provision of goods and service at a rate of 3.4 defects per million opportunities (2).

With the combination of both Lean and Six Sigma, an overall concept is established to eliminate waste and defects. By combining Lean and Six Sigma, production engineers are able to take the positive aspects of each and implement a faster, more efficient solution. In sum, Lean Six Sigma not only cuts costs but also improves the effectiveness of each process developed for an overall product or service.

1. http://en.wikipedia.org/wiki/Carl_Friedrich_Gauss

2. http://en.wikipedia.org/wiki/Lean_Six_Sigma

3. http://www.isixsigma.com/new-to-six-sigma/statistical-six-sigma-definition/

4. http://en.wikipedia.org/wiki/Lean_manufacturing

Chase Wright

Just In Time

March 29, 2012

Just-in-Time (JIT) is a method that reduces a company’s costs and improves workflow by scheduling materials to arrive at a work station or facility just in time for use. JIT basically focuses a company’s activities on its immediate need or demand. (1) In the article, A Review Of The Adoption Of Just-In-Time Method And Its Effect On Efficiency, the authors state that there are four major points that revolve around JIT: 1) the elimination of activities that do not add value to a product or service; 2) a commitment to a high level of quality; 3) a commitment to continuous improvement in the efficiency of an activity; and 4) an emphasis on simplification and increased visibility to identify activities that do not add value. (1, p. 26)

The Just-in-Time method, also called the JIT philosophy, evolved from the Japanese motor industry in the 1940s. While the American manufacturers were producing and storing as much inventory as possible, the Japanese, most notably Toyota’s Taiichi Ohno, began creating systems that could compete with the American automotive industry without bearing the cost of long production runs. (2) Taiichi Ohno created a system called the Toyota Production System (TPS), which eliminated waste in the automobile industry while minimizing stock. The two main components of TPS were just-in-time and autonomation. “Autonomation is the practice of determining the optimal way to perform a given task and then making this the ‘best practice’ standard method.” (3) By 1965, with the implementation of TPS, Toyota began to see a decrease in its production time and cost; and in the 1980s the American car manufacturers, who had not yet changed their procedures, suddenly realized that they had fallen behind the Japanese in the automotive industry.

The readings for this summary prove that when a manufacturing company implements the JIT philosophy, it will increase the efficiency of operations, improve quality, increase customer satisfaction, and improve management-worker relations, which can help the company gain a competitive advantage. (4, p. 78) It is also true that JIT can be equally successful outside of manufacturing. Although many believe that the methods used to increase efficiency and productivity in the manufacturing industry should be different from those used elsewhere, the authors of the article Benchmarking JIT prove otherwise. Their example of a hospital that had been spending 35% of its budget on supplies and inventory, and that saw a 90% reduction within about 18 months of applying JIT to its ordering procedures, proves this point. (4, p. 76) The authors state that the benefits of JIT, such as increased organizational efficiency and effectiveness, improved internal communications, and greater organizational discipline, are improvements that are needed in any industry, and that the JIT philosophy would work. (4)

For any industry, successful JIT implementation takes time. Since JIT is often called a philosophy (because of the number of changes required for implementation) and not a production method, the organization that implements JIT must decide to undergo corporate change. Since change of any kind for an entire organization is often a slow process, the full implementation of JIT will not happen overnight. It is stated in the article Benchmarking JIT that a successful implementation of JIT only happens when the organization’s strategic philosophy changes. Because of the number of things that could possibly change when using JIT, from operational and production procedures to customer relations and employee management, leadership is forced to realize that not only must the company’s methods change, but the way the company operates and makes decisions will be affected as well. (3)

By Cheryl Johnson

1) Younies H, Barhem B, Hsu C. Review of the adoption of just-in-time method and its effect on efficiency. Public Administration and Management: An Interactive Journal, 2007 (1), 25 – 27, 35.

2) Petersen, Peter. The misplaced origin of just-in-time production methods. Management Decision; 2002; 40, 1/2; ABI/INFORM Global, pg. 82-84

3) Hopp W, Spearman M. To Pull or Not to Pull: What Is the Question? Manufacturing & Service Operations Management; Spring 2004; 6, 2; ABI/INFORM Global; pg. 133.

4) Yasin M, Wafa M, Small, M. Benchmarking JIT: An analysis of JIT implementations in the manufacturing service and public sectors. Benchmarking; 2004; 11, 1; ABI/INFORM Global; p. 74-78.

Kaizen

March 29, 2012

Foundation of Kaizen

According to the Six Sigma literature, Kaizen is a Japanese term that means continuous improvement, taken from the words “kai” (continuous) and “zen” (improvement). Masaaki Imai’s 1986 book, Kaizen: The Key to Japan’s Competitive Success, drew a lot of attention from management experts around the world to a term used in Japan’s management philosophy: Kaizen. Mr. Imai defined it as the process of gradual and incremental improvement in the pursuit of perfection in business activities. (Smadi) Kaizen stresses small, continuous improvements to the existing process without the need for a major investment. Everyone in the organization is involved in the improvement effort, with the objective of improving productivity and reducing defects. The goal is to get all of the workers focused on suggesting small improvements that, over time, lead to big improvements in productivity, quality, safety, waste reduction, and leadership. In the book How To Do Kaizen: A New Path to Innovation, Bunji Tozawa and Norman Bodek tell the story of a company that used Kaizen strategies to get its employees to implement 96 ideas per person. The book also heavily stressed that supervision is the critical success factor: supervisors must not only oversee the effort but listen to, praise, and thank employees for their contributions. At its core, this means getting people involved in the process of reengineering and improving the adoption rate of change in an organization. Senior management at Dana Corporation viewed the task of asking their employees “what do you think?” as a very unusual idea compared to the rest of American industry. What Dana Corp. found was that instead of finding one big idea that would save money, they ended up with lots of small ideas and created an atmosphere of creative thinking. (Bodek, p. 44)

Practical Use

There are three essential elements to Kaizen. First, the employee who comes up with an idea must be involved in implementing it. Second, the solution should be defined in a simple way; ideally it should be only 75 words and include the problem, the solution, and the benefit. Third, the idea should be shared within the company. (Bodek) The automotive manufacturer Honda states in its company philosophy that respect for the individual is the foundation of the company’s principles. The same Kaizen thought appears in Toyota’s theme, which states that every Toyota team member is empowered to improve their work environment. Statements like these fuel a Kaizen culture that embraces change and constant improvement to stay competitive in the industry. In my organization, the concept of draw, see, think and plan, do, check, act is a Kaizen tool used to help with idea creation. A person should visualize or dream up an idea, evaluate reality versus the dream, think of a way to compromise in the middle, and create a plan to accomplish the idea. The cycle of do it, check it, adjust, and act again is the Kaizen way of continuous improvement. There are also several other tools in use, including visual management or, simply put, making problems visible. Another example is the concept of putting quality first and improving performance along the three dimensions of quality, cost, and delivery.

Kaizen and Business Process Engineering

Kaizen techniques and management principles are useful tools for business process engineering. Organizations that want to create an environment and a company culture of change and waste reduction can use Kaizen philosophies. Process reengineering requires employees to question why and how a process is done. The quick and easy steps of having the employee implement the idea, keeping it simple, and sharing it make Kaizen an easy way to redesign a business process.

By: Andre Swain

Work Cited:

“Kaizen done better.” Industrial Engineer May 2010: 15. General OneFile. Web. 28 Mar. 2012.

Smadi, Sami Al. “Kaizen strategy and the drive for competitiveness: challenges and opportunities.” Competitiveness Review 19.3 (2009): 203+

Bodek, Norman. “Quick And Easy Kaizen.” IIE Solutions 34.7 (2002): 43. Academic Search Premier. Web. 28 Mar. 2012.