
CICS

IBM CICS (Customer Information Control System) is a family of mixed-language application servers that provide online transaction management and connectivity for applications on IBM mainframe systems under z/OS and z/VSE.

Other names: Customer Information Control System
Initial release: July 8, 1969
Stable release: CICS Transaction Server for z/OS 6.1 / June 17, 2022[36]
Operating system: z/OS, z/VSE
Platform: IBM Z
Type: Teleprocessing monitor
License: Proprietary
Website: www.ibm.com/it-infrastructure/z/cics

CICS family products are designed as middleware and support rapid, high-volume online transaction processing. A CICS transaction is a unit of processing initiated by a single request that may affect one or more objects.[2] This processing is usually interactive (screen-oriented), but background transactions are possible.

CICS Transaction Server (CICS TS) sits at the head of the CICS family and provides services that extend or replace the functions of the operating system. These services can be more efficient than the generalized operating system services and also simpler for programmers to use, particularly with respect to communication with diverse terminal devices.

Applications developed for CICS may be written in a variety of programming languages and use CICS-supplied language extensions to interact with resources such as files, database connections, terminals, or to invoke functions such as web services. CICS manages the entire transaction such that if for any reason a part of the transaction fails all recoverable changes can be backed out.

While CICS TS has its highest profile among large financial institutions, such as banks and insurance companies, many Fortune 500 companies and government entities are reported to run CICS. Other, smaller enterprises can also run CICS TS and other CICS family products. CICS can regularly be found behind the scenes in, for example, bank-teller applications, ATM systems, industrial production control systems, insurance applications, and many other types of interactive applications.

Recent CICS TS enhancements include capabilities to improve the developer experience, including a choice of APIs, frameworks, editors, and build tools, together with updates in the key areas of security, resilience, and management. Earlier CICS TS releases added support for Web services, Java, event processing, Atom feeds, and RESTful interfaces.

History

CICS was preceded by an earlier, single-threaded transaction processing system, IBM MTCS. An 'MTCS-CICS bridge' was later developed so that MTCS transactions could execute under CICS with no change to the original application programs. IBM first developed CICS in conjunction with Michigan Bell in 1966.[3] Ben Riggins, an IBM systems engineer at Virginia Electric Power Co., came up with the idea for the online system.[4]

CICS was originally developed in the United States at the IBM Development Center in Des Plaines, Illinois, beginning in 1966, to address requirements from the public utility industry. The first CICS product was announced in 1968, named Public Utility Customer Information Control System, or PU-CICS. It immediately became clear that it had applicability to many other industries, so the Public Utility prefix was dropped with the introduction of the first release of the CICS Program Product on July 8, 1969, not long after the IMS database management system.

For the next few years, CICS was developed in Palo Alto and was considered a less important "smaller" product than IMS which IBM then considered more strategic. Customer pressure kept it alive, however. When IBM decided to end development of CICS in 1974 to concentrate on IMS, the CICS development responsibility was picked up by the IBM Hursley site in the United Kingdom, which had just ceased work on the PL/I compiler and so knew many of the same customers as CICS. The core of the development work continues in Hursley today alongside contributions from labs in India, China, Russia, Australia, and the United States.

Early evolution

CICS originally only supported a few IBM-brand devices like the 1965 IBM 2741 Selectric (golf ball) typewriter-based terminal. The 1964 IBM 2260 and 1972 IBM 3270 video display terminals were widely used later.

In the early days of IBM mainframes, computer software was free – bundled at no extra charge with computer hardware. The OS/360 operating system and application support software like CICS were "open" to IBM customers long before the open-source software initiative. Corporations like Standard Oil of Indiana (Amoco) made major contributions to CICS.

The IBM Des Plaines team tried to add support for popular non-IBM terminals like the ASCII Teletype Model 33 ASR, but the small low-budget software development team could not afford the $100-per-month hardware to test it. IBM executives incorrectly felt that the future would be like the past, with batch processing using traditional punch cards.

IBM reluctantly provided only minimal funding when public utility companies, banks and credit-card companies demanded a cost-effective interactive system (similar to the 1965 IBM Airline Control Program used by the American Airlines Sabre computer reservation system) for high-speed data access-and-update to customer information for their telephone operators (without waiting for overnight batch processing punch card systems).

When CICS was delivered to Amoco with Teletype Model 33 ASR support, it caused the entire OS/360 operating system to crash (including non-CICS application programs). The majority of the CICS Terminal Control Program (TCP – the heart of CICS) and part of OS/360 had to be laboriously redesigned and rewritten by Amoco Production Company in Tulsa, Oklahoma. It was then given back to IBM for free distribution to others.

In a few years,[when?] CICS generated over $60 billion in new hardware revenue for IBM, and became their most-successful mainframe software product.

In 1972, CICS was available in three versions – DOS-ENTRY (program number 5736-XX6) for DOS/360 machines with very limited memory, DOS-STANDARD (program number 5736-XX7), for DOS/360 machines with more memory, and OS-STANDARD V2 (program number 5734-XX7) for the larger machines which ran OS/360.[5]

In early 1970, a number of the original developers, including Ben Riggins (the principal architect of the early releases) relocated to California and continued CICS development at IBM's Palo Alto Development Center. IBM executives did not recognize value in software as a revenue-generating product until after federal law required software unbundling. In 1980, IBM executives failed to heed Ben Riggins' strong suggestions that IBM should provide their own EBCDIC-based operating system and integrated-circuit microprocessor chip for use in the IBM Personal Computer as a CICS intelligent terminal (instead of the incompatible Intel chip, and immature ASCII-based Microsoft 1980 DOS).

Because of the limited capacity of even large processors of that era, every CICS installation was required to assemble the source code for all of the CICS system modules after completing a process similar to a system generation (sysgen), called CICSGEN, to establish values for conditional assembly-language statements. This process allowed each customer to exclude from CICS itself support for any feature they did not intend to use, such as device support for terminal types not in use.

CICS owes its early popularity to its relatively efficient implementation when hardware was very expensive, its multi-threaded processing architecture, its relative simplicity for developing terminal-based real-time transaction applications, and many open-source customer contributions, including both debugging and feature enhancement.

Z notation

Part of CICS was formalized using the Z notation in the 1980s and 1990s in collaboration with the Oxford University Computing Laboratory, under the leadership of Tony Hoare. This work won a Queen's Award for Technological Achievement.[6]

CICS as a distributed file server

In 1986, IBM announced CICS support for the record-oriented file services defined by Distributed Data Management Architecture (DDM). This enabled programs on remote, network-connected computers to create, manage, and access files that had previously been available only within the CICS/MVS and CICS/VSE transaction processing environments.[7]

In newer versions of CICS, support for DDM has been removed. Support for the DDM component of CICS z/OS was discontinued at the end of 2003, and was removed from CICS for z/OS in version 5.2 onward.[8] In CICS TS for z/VSE, support for DDM was stabilised at V1.1.1 level, with an announced intention to discontinue it in a future release.[9] In CICS for z/VSE 2.1 onward, CICS/DDM is not supported.[10]

CICS and the World Wide Web

CICS Transaction Server first introduced a native HTTP interface in version 1.2, together with a Web Bridge technology for wrapping green-screen terminal-based programs with an HTML facade. CICS Web and Document APIs were enhanced in CICS TS V1.3 to enable web-aware applications to be written to interact more effectively with web browsers.

CICS TS versions 2.1 through 2.3 focused on introducing CORBA and EJB technologies to CICS, offering new ways to integrate CICS assets into distributed application component models. These technologies relied on hosting Java applications in CICS. The Java hosting environment saw numerous improvements over many releases, ultimately resulting in the embedding of the WebSphere Liberty Profile into CICS Transaction Server V5.1. Numerous web-facing technologies could be hosted in CICS using Java; this ultimately resulted in the removal of the native CORBA and EJB technologies.

CICS TS V3.1 added a native implementation of the SOAP and WSDL technologies for CICS, together with client side HTTP APIs for outbound communication. These twin technologies enabled easier integration of CICS components with other Enterprise applications, and saw widespread adoption. Tools were included for taking traditional CICS programs written in languages such as COBOL, and converting them into WSDL defined Web Services, with little or no program changes. This technology saw regular enhancements over successive releases of CICS.

CICS TS V4.1 and V4.2 saw further enhancements to web connectivity, including a native implementation of the Atom publishing protocol.

Many of the newer web-facing technologies were made available for earlier releases of CICS using delivery models other than a traditional product release. This allowed early adopters to provide constructive feedback that could influence the final design of the integrated technology. Examples include the SOAP for CICS technology preview SupportPac for TS V2.2, and the Atom SupportPac for TS V3.1. This approach was used to introduce JSON support for CICS TS V4.2, a technology that went on to be integrated into CICS TS V5.2.

The JSON technology in CICS is similar to earlier SOAP technology, both of which allowed programs hosted in CICS to be wrapped with a modern facade. The JSON technology was in turn enhanced in z/OS Connect Enterprise Edition, an IBM product for composing JSON APIs that can leverage assets from several mainframe subsystems.

Many partner products have also been used to interact with CICS. Popular examples include using the CICS Transaction Gateway for connecting to CICS from JCA compliant Java application servers, and IBM DataPower appliances for filtering web traffic before it reaches CICS.

Modern versions of CICS provide many ways for both existing and new software assets to be integrated into distributed application flows. CICS assets can be accessed from remote systems, and can access remote systems; user identity and transactional context can be propagated; RESTful APIs can be composed and managed; devices, users and servers can interact with CICS using standards-based technologies; and the IBM WebSphere Liberty environment in CICS promotes the rapid adoption of new technologies.

MicroCICS

By January 1985, a consulting company founded in 1969, which had built "massive on-line systems" for Hilton Hotels, FTD Florists, Amtrak, and Budget Rent-a-Car, announced what became MicroCICS.[11] The initial focus was the IBM XT/370 and IBM AT/370.[12]

CICS Family

Although "CICS" usually refers to CICS Transaction Server, the CICS Family is a portfolio of transaction servers, connectors (called CICS Transaction Gateway) and CICS Tools.

CICS on distributed platforms—not mainframes—is called IBM TXSeries. TXSeries is distributed transaction processing middleware. It supports C, C++, COBOL, Java and PL/I applications in cloud environments and traditional data centers. TXSeries is available on AIX, Linux x86, Windows, Solaris, and HP-UX platforms.[13] CICS is also available on other operating systems, notably IBM i and OS/2. The z/OS implementation (i.e., CICS Transaction Server for z/OS) is by far the most popular and significant.

Two versions of CICS were previously available for VM/CMS, but both have since been discontinued. In 1986, IBM released CICS/CMS,[14][11] which was a single-user version of CICS designed for development use, the applications later being transferred to an MVS or DOS/VS system for production execution.[15][16] Later, in 1988, IBM released CICS/VM.[17][18] CICS/VM was intended for use on the IBM 9370, a low-end mainframe targeted at departmental use; IBM positioned CICS/VM running on departmental or branch office mainframes for use in conjunction with a central mainframe running CICS for MVS.[19]

CICS Tools

Provisioning, management and analysis of CICS systems and applications is provided by CICS Tools. This includes performance management as well as deployment and management of CICS resources. In 2015, the four core foundational CICS tools (and the CICS Optimization Solution Pack for z/OS) were updated with the release of CICS Transaction Server for z/OS 5.3. The four core CICS Tools are CICS Interdependency Analyzer for z/OS, CICS Deployment Assistant for z/OS, CICS Performance Analyzer for z/OS, and CICS Configuration Manager for z/OS.

Releases and versions

CICS Transaction Server for z/OS has used the following release numbers:

Version Announcement Date Release Date End of Service Date Features
Old version, no longer maintained: CICS Transaction Server for OS/390 1.1 1996-09-10[20] 1996-11-08 2001-12-31
Old version, no longer maintained: CICS Transaction Server for OS/390 1.2 1997-09-09[21] 1997-10-24 2002-12-31
Old version, no longer maintained: CICS Transaction Server for OS/390 1.3 1998-09-08[22] 1999-03-26 2006-04-30
Old version, no longer maintained: CICS Transaction Server for z/OS 2.1 2001-03-13[23] 2001-03-30 2002-06-30
Old version, no longer maintained: CICS Transaction Server for z/OS 2.2 2001-12-04[24] 2002-01-25 2008-04-30
Old version, no longer maintained: CICS Transaction Server for z/OS 2.3 2003-10-28[25] 2003-12-19 2009-09-30
Old version, no longer maintained: CICS Transaction Server for z/OS 3.1 2004-11-30[26] 2005-03-25 2015-12-31
Old version, no longer maintained: CICS Transaction Server for z/OS 3.2 2007-03-27[27] 2007-06-29 2015-12-31
Old version, no longer maintained: CICS Transaction Server for z/OS 4.1 2009-04-28[28] 2009-06-26 2017-09-30
Old version, no longer maintained: CICS Transaction Server for z/OS 4.2 2011-04-05[29] 2011-06-24 2018-09-30
Old version, no longer maintained: CICS Transaction Server for z/OS 5.1 2012-10-03[30] 2012-12-14 2019-07-01
Old version, no longer maintained: CICS Transaction Server for z/OS 5.2 2014-04-07[31] 2014-06-13 2020-12-31
Old version, no longer maintained: CICS Transaction Server for z/OS 5.3 2015-10-05[32] 2015-12-11 2021-12-31
Older version, yet still maintained: CICS Transaction Server for z/OS 5.4 2017-05-16[33] 2017-06-16 2023-12-31
Older version, yet still maintained: CICS Transaction Server for z/OS 5.5 2018-10-02[34] 2018-12-14
Older version, yet still maintained: CICS Transaction Server for z/OS 5.6 2020-04-07[35] 2020-06-12 Support for Spring Boot, Jakarta EE 8, Node.js 12. New JCICSX API with remote development capability. Security, resilience and management enhancements.
Current stable version: CICS Transaction Server for z/OS 6.1 2022-04-05[36] 2022-06-17 Support for Java 11, Jakarta EE 9.1, Eclipse MicroProfile 5, Node.js 12, TLS 1.3. Security enhancements and simplifications. Region tagging.
Legend:
Old version
Older version, still maintained
Latest version
Latest preview version
Future release

Programming

Programming considerations

Multiple-user interactive-transaction application programs were required to be quasi-reentrant in order to support multiple concurrent transaction threads. A software coding error in one application could block all users from the system. The modular design of CICS reentrant / reusable control programs meant that, with judicious "pruning," multiple users with multiple applications could be executed on a computer with just 32K of expensive magnetic core physical memory (including the operating system).

Considerable effort was required by CICS application programmers to make their transactions as efficient as possible. A common technique was to limit the size of individual programs to no more than 4,096 bytes, or 4K, so that CICS could easily reuse the memory occupied by any program not currently in use for another program or other application storage needs. When virtual memory was added to OS/360 in 1972, the 4K strategy became even more important in reducing paging and thrashing, both of which are unproductive resource-contention overhead.

The efficiency of compiled high-level COBOL and PL/I language programs left much to be desired. Many CICS application programs continued to be written in assembler language, even after COBOL and PL/I support became available.

With 1960s-and-1970s hardware resources expensive and scarce, a competitive "game" developed among system optimization analysts. When critical path code was identified, a code snippet was passed around from one analyst to another. Each person had to either (a) reduce the number of bytes of code required, or (b) reduce the number of CPU cycles required. Younger analysts learned from what more-experienced mentors did. Eventually, when no one could do (a) or (b), the code was considered optimized, and they moved on to other snippets. Small shops with only one analyst learned CICS optimization very slowly (or not at all).

Because application programs could be shared by many concurrent threads, the use of static variables embedded within a program (or use of operating system memory) was restricted (by convention only).

Unfortunately, many of the "rules" were frequently broken, especially by COBOL programmers who might not understand the internals of their programs or fail to use the necessary restrictive compile time options. This resulted in "non-re-entrant" code that was often unreliable, leading to spurious storage violations and entire CICS system crashes.
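The reentrancy hazard can be pictured with a minimal sketch (plain Python with invented names, not CICS code): a program that stashes per-transaction state in shared static storage returns the wrong data when another transaction is dispatched between its store and its read, while a program that keeps the state in its own working storage does not.

```python
# Illustrative sketch (not CICS code) of why quasi-reentrant programs must
# keep per-transaction state out of shared static storage.  The "interleave"
# callback simulates another transaction preempting this one.

shared_buffer = {"customer": None}   # stands in for static storage shared by all tasks

def lookup_nonreentrant(customer_id, interleave=None):
    shared_buffer["customer"] = customer_id   # stash state in shared storage
    if interleave:
        interleave()                          # another transaction runs here...
    return shared_buffer["customer"]          # ...and may have overwritten it

def lookup_reentrant(customer_id, interleave=None):
    working_storage = customer_id             # per-invocation copy
    if interleave:
        interleave()
    return working_storage

# Transaction B preempts transaction A between A's store and A's read:
corrupted = lookup_nonreentrant("A-1001", interleave=lambda: lookup_nonreentrant("B-2002"))
clean     = lookup_reentrant("A-1001",  interleave=lambda: lookup_reentrant("B-2002"))
print(corrupted)  # B-2002  (transaction A sees transaction B's data)
print(clean)      # A-1001
```

In real CICS the damage was usually worse than a wrong answer, because the overwritten storage could belong to another transaction entirely, producing the storage violations described above.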

Originally, the entire partition, or Multiple Virtual Storage (MVS) region, operated with the same memory protection key, including the CICS kernel code. Program corruption and CICS control-block corruption were frequent causes of system downtime. A software error in one application program could overwrite the memory (code or data) of one or all currently running application transactions. Locating the offending application code for complex transient timing errors could be a very difficult problem for operating-system analysts.

These shortcomings persisted for multiple new releases of CICS over a period of more than 20 years, in spite of their severity and the fact that top-quality CICS skills were in high demand and short supply. They were addressed in TS V3.3, V4.1 and V5.2 with the Storage Protection, Transaction Isolation and Subspace features respectively, which utilize operating system hardware features to protect the application code and the data within the same address space even though the applications were not written to be separated. CICS application transactions remain mission-critical for many public utility companies, large banks and other multibillion-dollar financial institutions.

Additionally, a measure of advance application protection can be provided by testing under the control of a monitoring program that also provides test and debug features.

Macro-level programming

When CICS was first released, it only supported application transaction programs written in IBM 360 Assembler. COBOL and PL/I support were added years later. Because of the initial assembler orientation, requests for CICS services were made using assembler-language macros. For example, a request to read a record from a file was made by a macro call to the CICS "File Control Program" and might look like this:

DFHFC TYPE=READ,DATASET=myfile,TYPOPER=UPDATE,....etc. 

This gave rise to the later terminology "Macro-level CICS."

When high-level language support was added, the macros were retained and the code was converted by a pre-compiler that expanded the macros to their COBOL or PL/I CALL statement equivalents. Thus preparing an HLL application was effectively a "two-stage" compile: output from the preprocessor was fed into the HLL compiler as input.
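As a rough illustration of the pre-compiler idea, the sketch below (a toy in Python; the module name 'DFHFCP' and all details are invented for illustration, and real macro expansion produced far more than a single CALL) rewrites macro requests into CALL statements that an HLL compiler could process, passing other source lines through unchanged:

```python
# Toy sketch (invented details) of the two-stage compile: a preprocessor
# expands CICS macro requests into CALL statements before the real
# COBOL or PL/I compiler runs.
def expand_macros(source_lines):
    expanded = []
    for line in source_lines:
        stripped = line.strip()
        if stripped.startswith("DFHFC "):                # a File Control request
            operands = stripped[len("DFHFC "):]
            # emit the CALL-statement equivalent of the macro
            expanded.append("    CALL 'DFHFCP' USING " + operands.replace(",", " "))
        else:
            expanded.append(line)                        # ordinary source passes through
    return expanded

src = ["    DFHFC TYPE=READ,DATASET=MYFILE", "    MOVE A TO B"]
for line in expand_macros(src):
    print(line)
```

The output of such a pass is what the HLL compiler actually sees, which is why preparing a program required running the two stages in sequence.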

COBOL considerations: unlike PL/I, IBM COBOL does not normally provide for the manipulation of pointers (addresses). In order to allow COBOL programmers to access CICS control blocks and dynamic storage, the designers resorted to what was essentially a hack. The COBOL Linkage Section is normally used for inter-program communication, such as parameter passing. The compiler generates a list of addresses, each called a Base Locator for Linkage (BLL), which are set on entry to the called program. The first BLL corresponds to the first item in the Linkage Section, and so on. CICS allows the programmer to access and manipulate these by passing the address of the list as the first argument to the program. The BLLs can then be dynamically set, either by CICS or by the application, to allow access to the corresponding structure in the Linkage Section.[37]
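The BLL mechanism can be pictured with a small sketch (plain Python with invented names; offsets into a byte array stand in for real addresses): the program receives a list of "addresses", and the n-th entry locates the n-th Linkage Section item.

```python
# Sketch of the BLL idea (invented names): CICS passes the program a list
# of addresses; the n-th entry addresses the n-th Linkage Section item.
# Here "addresses" are offsets into a byte array standing in for storage.

dynamic_storage = bytearray(64)
dynamic_storage[16:24] = b"CUST0001"     # a control block placed at offset 16

bll_cells = [16]                         # first BLL -> first Linkage Section item

def linkage_item(blls, n, length):
    base = blls[n]                       # dereference the n-th BLL
    return bytes(dynamic_storage[base:base + length])

print(linkage_item(bll_cells, 0, 8))     # b'CUST0001'
```

Setting a BLL cell is thus equivalent to re-basing the corresponding Linkage Section structure onto a different piece of storage, which is exactly what CICS or the application did dynamically.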

Command-level programming

During the 1980s, IBM at Hursley Park produced a version of CICS that supported what became known as "Command-level CICS" which still supported the older programs but introduced a new API style to application programs.

A typical Command-level call might look like the following:

 EXEC CICS  SEND MAPSET('LOSMATT') MAP('LOSATT')  END-EXEC 

The values given in the SEND MAPSET command correspond to the names used on the first DFHMSD macro in the map definition given below for the MAPSET argument, and the DFHMDI macro for the MAP argument. This is pre-processed by a pre-compile batch translation stage, which converts the embedded commands (EXECs) into call statements to a stub subroutine. So, preparing application programs for later execution still required two stages. It was possible to write "mixed mode" applications using both Macro-level and Command-level statements.

Initially, at execution time, the command-level commands were converted using a run-time translator, "The EXEC Interface Program", to the old Macro-level call, which was then executed by the mostly unchanged CICS nucleus programs. But when the CICS Kernel was re-written for TS V3, EXEC CICS became the only way to program CICS applications, as many of the underlying interfaces had changed.

Run-time conversion

The Command-level-only CICS introduced in the early 1990s offered some advantages over earlier versions of CICS. However, IBM also dropped support for Macro-level application programs written for earlier versions. This meant that many application programs had to be converted or completely rewritten to use Command-level EXEC commands only.

By this time, there were perhaps millions of programs worldwide that had been in production for decades in many cases. Rewriting them often introduced new bugs without necessarily adding new features. There were a significant number of users who ran CICS V2 application-owning regions (AORs) to continue to run macro code for many years after the change to V3.

It was also possible to execute old Macro-level programs using conversion software such as APT International's Command CICS.[38]

New programming styles

Recent CICS Transaction Server enhancements include support for a number of modern programming styles.

CICS Transaction Server Version 5.6[39] introduced enhanced support for Java to deliver a cloud-native experience for Java developers. For example, the new CICS Java API (JCICSX) allows easier unit testing using mocking and stubbing approaches, and can be run remotely on the developer’s local workstation. A set of CICS artifacts on Maven Central enable developers to resolve Java dependencies using popular dependency management tools such as Apache Maven and Gradle. Plug-ins for Maven (cics-bundle-maven) and Gradle (cics-bundle-gradle) are also provided to simplify automated building of CICS bundles, using familiar IDEs like Eclipse, IntelliJ IDEA, and Visual Studio Code. In addition, Node.js z/OS support is enhanced for version 12, providing faster startup, better default heap limits, updates to the V8 JavaScript engine, etc. Support for Jakarta EE 8 is also included.

CICS TS 5.5 introduced support for IBM SDK for Node.js, providing a full JavaScript runtime, server-side APIs, and libraries to efficiently build high-performance, highly scalable network applications for IBM Z.

CICS Transaction Server Version 2.1 introduced support for Java. CICS Transaction Server Version 2.2 supported the Software Developers Toolkit. CICS provides the same run-time container as IBM's WebSphere product family, so Java EE applications are portable between CICS and WebSphere, and there is common tooling for the development and deployment of Java EE applications.

In addition, CICS placed an emphasis on "wrapping" existing application programs inside modern interfaces so that long-established business functions can be incorporated into more modern services. These include WSDL, SOAP and JSON interfaces that wrap legacy code so that a web or mobile application can obtain and update the core business objects without requiring a major rewrite of the back-end functions.
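As an idea of what such wrapping involves (a hedged Python sketch with invented names and data, not a CICS API), a JSON facade parses the inbound request, drives the legacy routine with the fixed-format record it expects, and renders the reply as JSON, leaving the legacy routine itself untouched:

```python
import json

# Hedged sketch (invented names and data model, not a CICS API): a JSON
# facade over a "legacy" routine that returns a fixed-format record with
# the account number in columns 1-8 and the balance in cents in columns 9-17.
def legacy_inquiry(account):                     # stands in for a COBOL program
    balance_cents = {"10000001": 1234567}.get(account, 0)
    return "{:<8}{:09d}".format(account, balance_cents)

def json_facade(body):                           # the modern wrapper
    account = json.loads(body)["account"]
    record = legacy_inquiry(account)             # drive the unchanged legacy code
    return json.dumps({"account": record[0:8].strip(),
                       "balance": int(record[8:17]) / 100})

print(json_facade('{"account": "10000001"}'))
# {"account": "10000001", "balance": 12345.67}
```

The legacy routine still deals only in its fixed-format record; all the JSON awareness lives in the facade, which mirrors how CICS-generated wrappers let back-end programs remain unmodified.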

Transactions

A CICS transaction is a set of operations that perform a task together. Usually, the majority of transactions are relatively simple tasks such as requesting an inventory list or entering a debit or credit to an account. A primary characteristic of a transaction is that it should be atomic. On IBM Z servers, CICS easily supports thousands of transactions per second, making it a mainstay of enterprise computing.
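The atomicity property can be sketched as follows (plain Python with an invented data model, not CICS code): every recoverable change made by the unit of work is committed as a whole, or backed out as a whole.

```python
# Sketch of the atomic unit-of-work idea: all recoverable changes are
# committed together, or all are backed out together (invented data model).
def run_transaction(accounts, debits):
    undo_log = []
    try:
        for name, amount in debits:
            before = accounts[name]
            if before < amount:
                raise ValueError("insufficient funds in " + name)
            undo_log.append((name, before))          # record recovery data first
            accounts[name] = before - amount
    except ValueError:
        for name, before in reversed(undo_log):      # back out in reverse order
            accounts[name] = before
        return False                                 # transaction failed, no changes
    return True                                      # all changes committed

accounts = {"A": 100, "B": 50}
print(run_transaction(accounts, [("A", 30), ("B", 80)]), accounts)
# False {'A': 100, 'B': 50}   -- the partial debit of A was backed out
print(run_transaction(accounts, [("A", 30)]), accounts)
# True {'A': 70, 'B': 50}
```

The key design point mirrored here is that recovery data is logged before each change is applied, so a failure partway through can always be undone.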

CICS applications comprise transactions, which can be written in numerous programming languages, including COBOL, PL/I, C, C++, IBM Basic Assembly Language, Rexx, and Java.

Each CICS program is initiated using a transaction identifier. CICS screens are usually sent as a construct called a map, a module created with Basic Mapping Support (BMS) assembler macros or third-party tools. CICS screens may contain text that is highlighted, has different colors, and/or blinks depending on the terminal type used. An example of how a map can be sent through COBOL is given below. The end user inputs data, which is made accessible to the program by receiving a map from CICS.

 EXEC CICS  RECEIVE MAPSET('LOSMATT') MAP('LOSATT') INTO(OUR-MAP)  END-EXEC. 

For technical reasons, the arguments to some command parameters must be quoted and some must not be quoted, depending on what is being referenced. Most programmers will code out of a reference book until they get the "hang" or concept of which arguments are quoted, or they'll typically use a "canned template" where they have example code that they just copy and paste, then edit to change the values.

Example of BMS Map Code

Basic Mapping Support defines the screen format through assembler macros such as the following. This was assembled to generate both the physical map set – a load module in a CICS load library – and a symbolic map set – a structure definition or DSECT in PL/I, COBOL, assembler, etc. which was copied into the source program.[40]

LOSMATT DFHMSD TYPE=MAP,                                               X
               MODE=INOUT,                                             X
               TIOAPFX=YES,                                            X
               TERM=3270-2,                                            X
               LANG=COBOL,                                             X
               MAPATTS=(COLOR,HILIGHT),                                X
               DSATTS=(COLOR,HILIGHT),                                 X
               STORAGE=AUTO,                                           X
               CTRL=(FREEKB,FRSET)
*
LOSATT  DFHMDI SIZE=(24,80),                                           X
               LINE=1,                                                 X
               COLUMN=1
*
LSSTDII DFHMDF POS=(1,01),                                             X
               LENGTH=04,                                              X
               COLOR=BLUE,                                             X
               INITIAL='MQCM',                                         X
               ATTRB=PROT
*
        DFHMDF POS=(24,01),                                            X
               LENGTH=79,                                              X
               COLOR=BLUE,                                             X
               ATTRB=ASKIP,                                            X
               INITIAL='PF7-  8-  9- 10-                               X
               11- 12-CANCEL'
*
        DFHMSD TYPE=FINAL
        END

Structure

In the z/OS environment, a CICS installation comprises one or more "regions" (generally referred to as a "CICS Region"),[41] spread across one or more z/OS system images. Although it processes interactive transactions, each CICS region is usually started as a batch address space with standard JCL statements: it is a job that runs indefinitely until shutdown. Alternatively, each CICS region may be started as a started task. Whether a batch job or a started task, CICS regions may run for days, weeks, or even months before shutting down for maintenance (MVS or CICS). Upon restart, a parameter determines whether the start should be "Cold" (no recovery) or "Warm"/"Emergency" (using a warm shutdown, or restarting from the log after a crash). Cold starts of large CICS regions with many resources can take a long time, as all the definitions are re-processed.

Installations are divided into multiple address spaces for a wide variety of reasons, such as:

  • application separation,
  • function separation,
  • avoiding the workload capacity limitations of a single region, or address space, or mainframe instance in the case of a z/OS SysPlex.

A typical installation consists of a number of distinct applications that make up a service. Each service usually has a number of "Terminal-Owning Regions" (TORs) that route transactions to multiple "Application-Owning Regions" (AORs), though other topologies are possible. For example, the AORs might not perform file I/O; instead, a "File-Owning Region" (FOR) would perform the file I/O on behalf of transactions in the AOR, given that, at the time, a VSAM file could support recoverable write access from only one address space at a time.

But not all CICS applications use VSAM as the primary data source (or, historically, other one-address-space-at-a-time datastores such as CA Datacom); many use IMS/DB or Db2 as the database, and/or MQ as a queue manager. In all these cases, TORs can load-balance transactions to sets of AORs, which then directly use the shared databases and queues. CICS supports XA two-phase commit between data stores, so transactions that span MQ, VSAM/RLS and Db2, for example, are possible with ACID properties.
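The TOR-to-AOR routing described above can be sketched as a simple round-robin dispatcher (illustrative Python with invented names; real CICS routing is configurable and workload-aware rather than strictly round-robin):

```python
from itertools import cycle

# Sketch of TOR-style workload routing (invented names): a terminal-owning
# region spreads incoming transactions across a set of application-owning
# regions, which then use the shared databases and queues directly.
class TerminalOwningRegion:
    def __init__(self, aor_names):
        self._aors = cycle(aor_names)        # simple round-robin selection

    def route(self, tranid):
        return next(self._aors)              # pick the AOR to run this transaction

tor = TerminalOwningRegion(["AOR1", "AOR2", "AOR3"])
print([tor.route(t) for t in ["PAY1", "PAY2", "INQ1", "PAY3"]])
# ['AOR1', 'AOR2', 'AOR3', 'AOR1']
```

Because every AOR in the set is equivalent, adding another AOR (possibly on another system image) raises capacity without any application change, which is the point of the topology.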

CICS supports distributed transactions using SNA LU6.2 protocol between the address spaces which can be running on the same or different clusters. This allows ACID updates of multiple datastores by cooperating distributed applications. In practice there are issues with this if a system or communications failure occurs because the transaction disposition (backout or commit) may be in-doubt if one of the communicating nodes has not recovered. Thus the use of these facilities has never been very widespread.

Sysplex exploitation

At the time of CICS ESA V3.2, in the early 1990s, IBM faced the challenge of getting CICS to exploit the new Sysplex mainframe line.

The Sysplex was to be based on CMOS (Complementary Metal Oxide Semiconductor) rather than the existing ECL (Emitter-Coupled Logic) hardware. The cost of scaling the mainframe-unique ECL was much higher than that of CMOS, which was being developed by a keiretsu with high-volume use cases such as the Sony PlayStation to reduce the unit cost of each generation's CPUs. ECL was also expensive for users to run, because the gate drain current produced so much heat that the CPU had to be packaged into a special module called a Thermal Conduction Module (TCM[42]) that had inert-gas pistons and needed to be plumbed into high-volume chilled water for cooling. But the air-cooled CMOS technology's CPU speed was initially much slower than the ECL (notably in the boxes available from the mainframe-clone makers Amdahl and Hitachi). This was especially concerning to IBM in the CICS context, as almost all the largest mainframe customers ran CICS, and for many of them it was the primary mainframe workload.

To achieve the same total transaction throughput on a Sysplex, multiple boxes would need to run each workload in parallel, but a CICS address space, owing to its semi-reentrant application programming model, could not exploit more than about 1.5 processors on one box at the time, even with the use of MVS sub-tasks. Without a solution, these customers would tend to move to competitors rather than to Sysplex as they scaled up their CICS workloads. There was considerable debate inside IBM as to whether the right approach was to break upward compatibility for applications and move to a fully reentrant model like IMS/DC, or to extend the approach customers had already adopted to more fully exploit a single mainframe's power: multi-region operation (MRO).

Eventually the second path was adopted, after the CICS user community was consulted and vehemently opposed breaking upward compatibility: they had the prospect of Y2K to contend with at the time and saw no value in rewriting and retesting millions of lines of mainly COBOL, PL/I, or assembler code.

The IBM-recommended structure for CICS on Sysplex placed at least one CICS Terminal Owning Region (TOR) on each Sysplex node, dispatching transactions to many Application Owning Regions (AORs) spread across the entire Sysplex. If these applications needed to access shared resources, they either used a Sysplex-exploiting datastore (such as IBM Db2 or IMS/DB) or concentrated the resource requests, by function-shipping, into one Resource Owning Region (ROR) per resource, including File Owning Regions (FORs) for VSAM and CICS Data Tables, and Queue Owning Regions (QORs) for MQ, CICS Transient Data (TD), and CICS Temporary Storage (TS). This preserved compatibility for legacy applications at the expense of the operational complexity of configuring and managing many CICS regions.
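
The TOR/AOR/FOR topology can be illustrated with a toy routing model. Everything here (class names, the round-robin policy, the request strings) is invented for illustration; real CICS routing is configured, not coded like this.

```python
from itertools import cycle

class Region:
    """Toy CICS region that simply records the work routed to it."""
    def __init__(self, name):
        self.name = name
        self.handled = []

class TerminalOwningRegion:
    """Round-robins incoming transactions across a set of AORs; any file
    request is function-shipped to the single File Owning Region."""
    def __init__(self, aors, for_region):
        self._next_aor = cycle(aors)     # simple load-balancing policy
        self.for_region = for_region

    def dispatch(self, txn_id, file_request=None):
        aor = next(self._next_aor)
        aor.handled.append(txn_id)
        if file_request is not None:
            # Function-shipping: the AOR does not open the VSAM file itself.
            self.for_region.handled.append((txn_id, file_request))
        return aor.name
```

The point of the sketch is the asymmetry: transaction work fans out across many AORs, while access to a non-shared resource funnels back through one owning region, which is exactly the compatibility-preserving bottleneck the Coupling Facility later removed.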

In subsequent releases and versions, CICS was able to exploit new Sysplex-exploiting facilities in VSAM/RLS[43] and MQ for z/OS,[44] and placed its own Data Tables, TD, and TS resources into the architected shared resource manager for the Sysplex, the Coupling Facility (CF), dispensing with the need for most RORs. The CF provides a mapped view of resources including a shared timebase, buffer pools, locks, and counters, with hardware messaging assists that made sharing resources across the Sysplex both more efficient than polling and reliable (utilizing a semi-synchronized backup CF in case of failure).

By this time, the CMOS line had individual boxes that exceeded the power of the fastest ECL box, with more processors per box, and when these were coupled together, 32 or more nodes could scale two orders of magnitude larger in total power for a single workload. For example, by 2002, Charles Schwab was running a "MetroPlex" consisting of a redundant pair of mainframe Sysplexes in two locations in Phoenix, AZ, each with 32 nodes, driven by one shared CICS/Db2 workload to support the vast volume of pre-dot-com-bubble web client inquiry requests.

This cheaper, much more scalable CMOS technology base, together with the huge investment cost of both getting to 64-bit addressing and independently producing cloned CF functionality, drove the IBM-mainframe clone makers out of the business one by one.[45][46]

CICS Recovery/Restart

The objective of recovery/restart in CICS is to minimize, and if possible eliminate, damage done to the online system when a failure occurs, so that system and data integrity are maintained.[47] If the CICS region was shut down rather than failing, it performs a "warm" start, exploiting the checkpoint written at shutdown. The CICS region can also be forced to "cold" start, which reloads all definitions and wipes out the log, leaving the resources in whatever state they happen to be.

Under CICS, the following are some of the resources considered recoverable. For these resources to be recoverable, special options must be specified in the relevant CICS definitions:

  • VSAM files
  • CICS-maintained data tables (CMT)
  • Intrapartition TDQ
  • Temporary Storage Queue in auxiliary storage
  • I/O messages from/to transactions in a VTAM network
  • Other database/queuing resources connected to CICS that support XA two-phase commit protocol (like IMS/DB, Db2, VSAM/RLS)

CICS also offers extensive recovery/restart facilities for users to establish their own recovery/restart capability in their CICS system. Commonly used recovery/restart facilities include:

  • Dynamic Transaction Backout (DTB)
  • Automatic Transaction Restart
  • Resource Recovery using System Log
  • Resource Recovery using Journal
  • System Restart
  • Extended Recovery Facility
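
Dynamic Transaction Backout can be illustrated with a toy before-image log. This is an invented model, not the actual CICS logging design: the `RecoverableFile` class and its methods are hypothetical, but the mechanism (record each change's before-image, discard it at syncpoint, replay it in reverse on failure) is the core idea.

```python
class RecoverableFile:
    """Toy recoverable resource: keeps before-images of the current
    unit of work so in-flight changes can be backed out."""
    def __init__(self, records):
        self.records = dict(records)
        self._undo_log = []              # before-images for the current UOW

    def update(self, key, value):
        self._undo_log.append((key, self.records.get(key)))
        self.records[key] = value

    def commit(self):
        self._undo_log.clear()           # syncpoint: changes are now permanent

    def backout(self):
        # Apply before-images in reverse order, undoing the unit of work.
        for key, before in reversed(self._undo_log):
            if before is None:
                self.records.pop(key, None)   # key did not exist before
            else:
                self.records[key] = before
        self._undo_log.clear()
```

After `backout()` the resource is exactly as it was at the last syncpoint, which is what allows CICS to abend a transaction without corrupting recoverable data.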

Components

Each CICS region comprises one major task on which every transaction runs, although certain services such as access to IBM Db2 data use other tasks (TCBs). Within a region, transactions are cooperatively multitasked – they are expected to be well-behaved and yield the CPU rather than wait. CICS services handle this automatically.
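
The cooperative multitasking described above can be sketched with Python generators standing in for quasi-reentrant tasks. This is a teaching model only; names and structure are invented, and real CICS dispatching is far more sophisticated.

```python
from collections import deque

def scheduler(tasks):
    """Toy cooperative dispatcher: each task runs until it voluntarily
    yields (e.g. at a wait point); nothing preempts a running task."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # run the task to its next yield point
            ready.append(task)         # re-queue it behind other ready tasks
        except StopIteration:
            pass                       # task has completed
    return trace

def transaction(name, steps):
    """A well-behaved task: does a unit of work, then yields the CPU."""
    for i in range(steps):
        yield f"{name}:{i}"
```

The model also shows the failure mode the text alludes to: a task that loops without yielding would stall every other transaction in the region, which is why CICS applications are expected to be well-behaved.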

Each unique CICS "task" or transaction is allocated its own dynamic memory at start-up, and subsequent requests for additional memory are handled by a call to the Storage Control Program (part of the CICS nucleus, or "kernel"), which is analogous to an operating system.

A CICS system consists of the online nucleus, batch support programs, and applications services.[48]

Nucleus

The original CICS nucleus consisted of a number of functional modules written in System/370 assembler until V3:

  • Task Control Program (KCP)
  • Storage Control Program (SCP)
  • Program Control Program (PCP)
  • Program Interrupt Control Program (PIP)
  • Interval Control Program (ICP)
  • Dump Control Program (DCP)
  • Terminal Control Program (TCP)
  • File Control Program (FCP)
  • Transient Data Control Program (TDP)
  • Temporary Storage Control Program (TSP)

Starting in V3, the CICS nucleus was rewritten into a kernel-and-domain structure using IBM's PL/AS language – which is compiled into assembler.

The prior structure did not enforce separation of concerns, and so had many inter-program dependencies that led to bugs unless exhaustive code analysis was done. The new structure was more modular, and hence more resilient, because it was easier to change without side effects. The first domains often took the name of the prior program but without the trailing "P": for example, Program Control Domain (DFHPC) or Transient Data Domain (DFHTD). The kernel operated as a switcher for inter-domain requests; initially this proved expensive for frequently called domains (such as Trace), but by utilizing PL/AS macros these calls were in-lined without compromising the separate domain design.
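
The kernel's role as a switcher can be sketched as a dispatch table: domains register entry points, and every cross-domain call passes through one routing function. This is a schematic model with invented handlers (the domain names `DFHTD` and `DFHLG` come from the text; everything else is hypothetical).

```python
class Kernel:
    """Toy inter-domain switcher: all cross-domain calls are routed
    through the kernel, giving one place to trace and audit them."""
    def __init__(self):
        self._domains = {}
        self.calls = []                       # central trace of every call

    def register(self, name, handler):
        self._domains[name] = handler

    def call(self, domain, request):
        self.calls.append((domain, request))  # the cost the text mentions:
        return self._domains[domain](request) # every call pays for routing

kernel = Kernel()
kernel.register("DFHTD", lambda req: f"TD handled {req}")
kernel.register("DFHLG", lambda req: f"logged {req}")
```

The single `call` entry point is what made frequent calls (like Trace) expensive; in-lining via PL/AS macros effectively removed the routing step for hot paths while keeping the logical domain boundary intact.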

In later versions, completely redesigned domains were added, such as the Logging Domain (DFHLG) and Transaction Domain (DFHTM), which replaced the Journal Control Program (JCP).

Support programs

In addition to the online functions, CICS has several support programs that run as batch jobs.[49]: pp.34–35

  • High-level language (macro) preprocessor
  • Command language translator
  • Dump utility – prints formatted dumps generated by CICS Dump Management
  • Trace utility – formats and prints CICS trace output
  • Journal formatting utility – formats and prints the contents of CICS journal data sets

Applications services

The following components of CICS support application development.[49]: pp.35–37 

  • Basic Mapping Support (BMS) provides device-independent terminal input and output
  • APPC support, providing LU6.1 and LU6.2 APIs for collaborating distributed applications that support two-phase commit
  • Data Interchange Program (DIP) provides support for IBM 3770 and IBM 3790 programmable devices
  • 2260 Compatibility allows programs written for IBM 2260 display devices to run on 3270 displays
  • EXEC Interface Program – the stub program that converts calls generated by EXEC CICS commands to calls to CICS functions
  • Built-in Functions – table search, phonetic conversion, field verify, field edit, bit checking, input formatting, weighted retrieval

Pronunciation

Different countries have differing pronunciations:[50]

  • Within IBM (specifically Tivoli) it is referred to as /ˈkɪks/.
  • In the US, it is more usually pronounced by reciting each letter, /ˌsiːˌaɪˌsiːˈɛs/.
  • In Australia, Belgium, Canada, Hong Kong, the UK and some other countries, it is pronounced /ˈkɪks/.
  • In Denmark, it is pronounced kicks.
  • In Finland, it is pronounced [kiks].
  • In France, it is pronounced [se.i.se.ɛs].
  • In Germany, Austria and Hungary, it is pronounced [ˈtsɪks] and, less often, [ˈkɪks].
  • In Greece, it is pronounced kiks.
  • In India, it is pronounced kicks.
  • In Iran, it is pronounced kicks.
  • In Israel, it is pronounced C-I-C-S.
  • In Italy, it is pronounced [ˈtʃiks].
  • In Poland, it is pronounced [ˈkʲiks].
  • In Portugal and Brazil, it is pronounced [ˈsiks].
  • In Russia, it is pronounced kiks.
  • In Slovenia, it is pronounced kiks.
  • In Spain, it is pronounced [ˈθiks].
  • In Sweden, it is pronounced kicks.
  • In Uganda, it is pronounced kicks.
  • In Turkey, it is pronounced kiks.

See also

References

  1. ^ "IBM CICS Transaction Server for z/OS, V5.6 delivers significant improvements to the developer experience, security, resilience, and management". IBM. 7 April 2020. Retrieved 28 July 2021.
  2. ^ IBM Corporation. "CICS Transaction Server for z/OS Glossary:T". IBM. from the original on 15 June 2021. Retrieved 2 February 2021.
  3. ^ "IBM archives". IBM. 23 January 2003. Retrieved 6 December 2022.
  4. ^ "ESM Mainframe hall of fame". ESM. Retrieved 6 December 2022.
  5. ^ Customer Information Control System (CICS) General Information Manual (PDF). White Plains, New York: IBM. December 1972. GH20-1028-3. (PDF) from the original on 29 May 2019. Retrieved 1 April 2016.
  6. ^ King, Steve (1993). "The Use of Z in the Restructure of IBM CICS". In Hayes, Ian (ed.). Specification Case Studies (2nd ed.). New York: Prentice Hall. pp. 202–213. ISBN 978-0-13-832544-2.
  7. ^ Warner, Edward (23 February 1987). "IBM Gives PC Programs Direct Mainframe Access: PC Applications Can Alter Files". InfoWorld. 9 (8): 1. from the original on 24 December 2016. Retrieved 1 April 2016.
  8. ^ "IBM CICS Transaction Server for z/OS, V5.2 takes service agility, operational efficiency, and cloud enablement to a new level". IBM. 7 April 2014. from the original on 15 June 2021. Retrieved 14 April 2016. CICS DDM is no longer available from IBM and support was discontinued, as of December 31, 2003. CICS DDM is no longer available in CICS TS from Version 5.2 onwards.
  9. ^ "IBM z/VSE Central Functions Version 9.2 - z/VSE Version 5.2". IBM. 7 April 2014. from the original on 24 March 2016. Retrieved 14 April 2016. Support for CICS Distributed Data Management (DDM) is stabilized in CICS TS for VSE/ESA V1.1.1. In a future release of CICS TS for z/VSE, IBM intends to discontinue support for CICS DDM.
  10. ^ "IBM CICS Transaction Server for z/VSE V2.1 delivers enhancements for future workloads". IBM. 5 October 2015. from the original on 24 April 2016. Retrieved 14 April 2016. CICS Distributed Data Management (CICS/DDM) is not supported with CICS TS for z/VSE V2.1.
  11. ^ a b Paul E. Schindler, Jr. (27 October 1986). "Unicorn is Betting that CICS is easer and cheaper on a PC". InformationWeek. pp. 41–44.
  12. ^ "Unicorn MicroCICS/RT". Computerworld. 9 December 1985. p. 98. IBM Personal Computer XT/370 family
  13. ^ "IBM Get its CICS". Midrange Systems. 10 November 1992. p. 35.
  14. ^ "announced .. October of 1985 .. didn't start deliveries until July of this year."
  15. ^ "CICS/CMS". IBM. from the original on 2 April 2016. Retrieved 1 April 2016.
  16. ^ "CUSTOMER INFORMATION CONTROL SYSTEM/ CONVERSATIONAL MONITOR SYSTEM (CICS/CMS) RELEASE 1 ANNOUNCED AND PLANNED TO BE AVAILABLE JUNE 1986". IBM. 15 October 1985. from the original on 2 April 2016. Retrieved 2 April 2016.
  17. ^ "(CICS/VM) Customer Information Control System / Virtual Machine". IBM. from the original on 13 April 2016. Retrieved 1 April 2016.
  18. ^ "CUSTOMER INFORMATION CONTROL SYSTEM/VIRTUAL MACHINE (CICS/VM)". IBM. 20 October 1987. from the original on 2 April 2016. Retrieved 2 April 2016.
  19. ^ Babcock, Charles (2 November 1987). "VM/SP update eases migration". Computerworld. Vol. 21, no. 44. IDG Enterprise. pp. 25, 31. ISSN 0010-4841. from the original on 31 March 2017. Retrieved 30 March 2017.
  20. ^ "US - IBM CICS Transaction Server (CICS TS) for OS/390". www.ibm.com. 3 February 2004. Retrieved 7 May 2022.
  21. ^ "US - IBM CICS Transaction Server (CICS TS) for OS/390". www.ibm.com. 3 February 2004. Retrieved 7 May 2022.
  22. ^ "US - IBM CICS Transaction Server (CICS TS) for OS/390". www.ibm.com. 3 February 2004. Retrieved 7 May 2022.
  23. ^ "CICS TS for z/OS V2". www.ibm.com. 23 May 2001. Retrieved 13 May 2022.
  24. ^ "IBM CICS Transaction Server for z/OS V2.2 Delivers Major Value to All CICS Customers". www.ibm.com. 4 December 2001. Retrieved 7 May 2022.
  25. ^ "IBM CICS Transaction Server for z/OS V2.3 advances towards on demand business". www.ibm.com. 28 October 2003. Retrieved 7 May 2022.
  26. ^ "IBM CICS Transaction Server for z/OS V3.1 offers improved integration, application transformation". www.ibm.com. 30 November 2004. Retrieved 7 May 2022.
  27. ^ "CICS Transaction Server for z/OS V3.2 delivers significant innovation for application connectivity". www.ibm.com. 27 March 2007. Retrieved 7 May 2022.
  28. ^ "IBM US Announcement Letter". www.ibm.com. 28 April 2009. Retrieved 7 May 2022.
  29. ^ "IBM US Announcement Letter". www.ibm.com. 5 April 2011. Retrieved 7 May 2022.
  30. ^ "IBM CICS Transaction Server for z/OS V5.1 delivers operational efficiency and service agility with cloud enablement". www.ibm.com. 3 October 2012. Retrieved 7 May 2022.
  31. ^ "IBM CICS Transaction Server for z/OS, V5.2 takes service agility, operational efficiency, and cloud enablement to a new level". www.ibm.com. 7 April 2014. Retrieved 7 May 2022.
  32. ^ "IBM CICS Transaction Server for z/OS, V5.3 delivers advances in service agility, operational efficiency, and cloud enablement with DevOps". www.ibm.com. 5 October 2015. Retrieved 7 May 2022.
  33. ^ "IBM CICS Transaction Server for z/OS, V5.4 delivers unparalleled mixed language application serving". www.ibm.com. 16 May 2017. Retrieved 7 May 2022.
  34. ^ "IBM CICS Transaction Server for z/OS, V5.5 delivers support for Node.js and further enhancements to CICS Explorer, systems management, and security". www.ibm.com. 2 October 2018. Retrieved 7 May 2022.
  35. ^ "IBM CICS Transaction Server for z/OS, V5.6 delivers significant improvements to the developer experience, security, resilience, and management". www.ibm.com. 7 April 2020. Retrieved 6 May 2022.
  36. ^ "IBM CICS Transaction Server for z/OS, 6.1 delivers significant improvements in the areas of developer productivity, security, and management". www.ibm.com. 5 April 2022. Retrieved 6 May 2022.
  37. ^ IBM Corporation (1972). Customer Information Control System (CICS) Application Programmer's Reference Manual (PDF). (PDF) from the original on 29 May 2019. Retrieved 4 January 2016.
  38. ^ "Command/CICS". IBM. from the original on 15 June 2021. Retrieved 22 April 2018.
  39. ^ "IBM CICS Transaction Server for z/OS, V5.6 delivers significant improvements to the developer experience, security, resilience, and management". 7 April 2020. from the original on 10 July 2020. Retrieved 9 July 2020.
  40. ^ IBM Corporation. "Basic mapping support". CICS Information Center. Archived from the original on 3 January 2013.
  41. ^ IBM (13 September 2010). . CICS Transaction Server for z/OS V3.2. IBM Information Center, Boulder, Colorado. Archived from the original on 1 September 2013. Retrieved 12 December 2010.
  42. ^ "IBM Archives: Thermal conduction module". www-03.ibm.com. 23 January 2003. from the original on 20 July 2016. Retrieved 1 June 2018.
  43. ^ "IMS Context". IMS. Chichester, UK: John Wiley & Sons, Ltd. 2009. pp. 1–39. doi:10.1002/9780470750001.ch1. ISBN 9780470750001.
  44. ^ "IBM Knowledge Center MQ for zOS". www.ibm.com. 11 March 2014. from the original on 7 August 2016. Retrieved 1 June 2018.
  45. ^ Vijayan, Jaikumar. "Amdahl gives up on mainframe business". Computerworld. from the original on 3 November 2018. Retrieved 1 June 2018.
  46. ^ "Hitachi exits mainframe hardware but will collab with IBM on z Systems". from the original on 13 June 2018. Retrieved 1 June 2018.
  47. ^ "IBM Knowledge Center". publib.boulder.ibm.com. from the original on 15 June 2021. Retrieved 2 February 2021.
  48. ^ IBM Corporation (1975). Customer Information Control System (CICS) System Programmer's Reference Manual (PDF). (PDF) from the original on 17 February 2011. Retrieved 21 November 2012.
  49. ^ a b IBM Corporation (1977). (PDF). Archived from the original (PDF) on 17 February 2011. Retrieved 24 November 2012.
  50. ^ "CICS - An Introduction" (PDF). IBM Corporation. 8 July 2004. Retrieved 20 April 2014.

External links

  • Official website
  • Why to choose CICS Transaction Server for new IT projects – IBM CICS whitepaper
  • Support Forum for CICS Programming
  • CICS User Community website for CICS related news, announcements and discussions

cics, other, uses, disambiguation, customer, information, control, system, family, mixed, language, application, servers, that, provide, online, transaction, management, connectivity, applications, mainframe, systems, under, other, namescustomer, information, . For other uses see CICS disambiguation IBM CICS Customer Information Control System is a family of mixed language application servers that provide online transaction management and connectivity for applications on IBM mainframe systems under z OS and z VSE CICSOther namesCustomer Information Control SystemInitial releaseJuly 8 1969 53 years ago July 8 1969 Stable releaseCICS Transaction Server V5 6 June 12 2020 2 years ago 2020 06 12 1 Operating systemz OS z VSEPlatformIBM ZTypeTeleprocessing monitorLicenseProprietaryWebsitewww wbr ibm wbr com wbr it infrastructure wbr z wbr cics CICS family products are designed as middleware and support rapid high volume online transaction processing A CICS transaction is a unit of processing initiated by a single request that may affect one or more objects 2 This processing is usually interactive screen oriented but background transactions are possible CICS Transaction Server CICS TS sits at the head of the CICS family and provides services that extend or replace the functions of the operating system These services can be more efficient than the generalized operating system services and also simpler for programmers to use particularly with respect to communication with diverse terminal devices Applications developed for CICS may be written in a variety of programming languages and use CICS supplied language extensions to interact with resources such as files database connections terminals or to invoke functions such as web services CICS manages the entire transaction such that if for any reason a part of the transaction fails all recoverable changes can be backed out While CICS TS has its highest profile among large financial institutions such as banks and insurance 
companies many Fortune 500 companies and government entities are reported to run CICS Other smaller enterprises can also run CICS TS and other CICS family products CICS can regularly be found behind the scenes in for example bank teller applications ATM systems industrial production control systems insurance applications and many other types of interactive applications Recent CICS TS enhancements include new capabilities to improve the developer experience including the choice of APIs frameworks editors and build tools while at the same time providing updates in the key areas of security resilience and management In earlier recent CICS TS releases support was provided for Web services and Java event processing Atom feeds and RESTful interfaces Contents 1 History 1 1 Early evolution 1 2 Z notation 1 3 CICS as a distributed file server 1 4 CICS and the World Wide Web 1 5 MicroCICS 1 6 CICS Family 1 7 CICS Tools 1 8 Releases and versions 2 Programming 2 1 Programming considerations 2 2 Macro level programming 2 3 Command level programming 2 4 Run time conversion 2 5 New programming styles 3 Transactions 3 1 Example of BMS Map Code 4 Structure 5 Sysplex exploitation 6 CICS Recovery Restart 7 Components 7 1 Nucleus 7 2 Support programs 7 3 Applications services 8 Pronunciation 9 See also 10 References 11 External linksHistory EditCICS was preceded by an earlier single threaded transaction processing system IBM MTCS An MTCS CICS bridge was later developed to allow these transactions to execute under CICS with no change to the original application programs IBM s Customer Information Control System CICS first developed in conjunction with Michigan Bell in 1966 3 Ben Riggins was an IBM systems engineer at Virginia Electric Power Co when he came up with the idea for the online system 4 CICS was originally developed in the United States out of the IBM Development Center in Des Plaines Illinois beginning in 1966 to address requirements from the public utility industry The 
first CICS product was announced in 1968 named Public Utility Customer Information Control System or PU CICS It became clear immediately that it had applicability to many other industries so the Public Utility prefix was dropped with the introduction of the first release of the CICS Program Product on July 8 1969 not long after IMS database management system For the next few years CICS was developed in Palo Alto and was considered a less important smaller product than IMS which IBM then considered more strategic Customer pressure kept it alive however When IBM decided to end development of CICS in 1974 to concentrate on IMS the CICS development responsibility was picked up by the IBM Hursley site in the United Kingdom which had just ceased work on the PL I compiler and so knew many of the same customers as CICS The core of the development work continues in Hursley today alongside contributions from labs in India China Russia Australia and the United States Early evolution Edit CICS originally only supported a few IBM brand devices like the 1965 IBM 2741 Selectric golf ball typewriter based terminal The 1964 IBM 2260 and 1972 IBM 3270 video display terminals were widely used later In the early days of IBM mainframes computer software was free bundled at no extra charge with computer hardware The OS 360 operating system and application support software like CICS were open to IBM customers long before the open source software initiative Corporations like Standard Oil of Indiana Amoco made major contributions to CICS The IBM Des Plaines team tried to add support for popular non IBM terminals like the ASCII Teletype Model 33 ASR but the small low budget software development team could not afford the 100 per month hardware to test it IBM executives incorrectly felt that the future would be like the past with batch processing using traditional punch cards IBM reluctantly provided only minimal funding when public utility companies banks and credit card companies demanded a 
cost effective interactive system similar to the 1965 IBM Airline Control Program used by the American Airlines Sabre computer reservation system for high speed data access and update to customer information for their telephone operators without waiting for overnight batch processing punch card systems When CICS was delivered to Amoco with Teletype Model 33 ASR support it caused the entire OS 360 operating system to crash including non CICS application programs The majority of the CICS Terminal Control Program TCP the heart of CICS and part of OS 360 had to be laboriously redesigned and rewritten by Amoco Production Company in Tulsa Oklahoma It was then given back to IBM for free distribution to others In a few years when CICS generated over 60 billion in new hardware revenue for IBM and became their most successful mainframe software product In 1972 CICS was available in three versions DOS ENTRY program number 5736 XX6 for DOS 360 machines with very limited memory DOS STANDARD program number 5736 XX7 for DOS 360 machines with more memory and OS STANDARD V2 program number 5734 XX7 for the larger machines which ran OS 360 5 In early 1970 a number of the original developers including Ben Riggins the principal architect of the early releases relocated to California and continued CICS development at IBM s Palo Alto Development Center IBM executives did not recognize value in software as a revenue generating product until after federal law required software unbundling In 1980 IBM executives failed to heed Ben Riggins strong suggestions that IBM should provide their own EBCDIC based operating system and integrated circuit microprocessor chip for use in the IBM Personal Computer as a CICS intelligent terminal instead of the incompatible Intel chip and immature ASCII based Microsoft 1980 DOS Because of the limited capacity of even large processors of that era every CICS installation was required to assemble the source code for all of the CICS system modules after 
completing a process similar to system generation sysgen called CICSGEN to establish values for conditional assembly language statements This process allowed each customer to exclude support from CICS itself for any feature they did not intend to use such as device support for terminal types not in use CICS owes its early popularity to its relatively efficient implementation when hardware was very expensive its multi threaded processing architecture its relative simplicity for developing terminal based real time transaction applications and many open source customer contributions including both debugging and feature enhancement Z notation Edit Part of CICS was formalized using the Z notation in the 1980s and 1990s in collaboration with the Oxford University Computing Laboratory under the leadership of Tony Hoare This work won a Queen s Award for Technological Achievement 6 CICS as a distributed file server Edit In 1986 IBM announced CICS support for the record oriented file services defined by Distributed Data Management Architecture DDM This enabled programs on remote network connected computers to create manage and access files that had previously been available only within the CICS MVS and CICS VSE transaction processing environments 7 In newer versions of CICS support for DDM has been removed Support for the DDM component of CICS z OS was discontinued at the end of 2003 and was removed from CICS for z OS in version 5 2 onward 8 In CICS TS for z VSE support for DDM was stabilised at V1 1 1 level with an announced intention to discontinue it in a future release 9 In CICS for z VSE 2 1 onward CICS DDM is not supported 10 CICS and the World Wide Web Edit CICS Transaction Server first introduced a native HTTP interface in version 1 2 together with a Web Bridge technology for wrapping green screen terminal based programs with an HTML facade CICS Web and Document APIs were enhanced in CICS TS V1 3 to enable web aware applications to be written to interact more 
effectively with web browsers CICS TS versions 2 1 through 2 3 focused on introducing CORBA and EJB technologies to CICS offering new ways to integrate CICS assets into distributed application component models These technologies relied on hosting Java applications in CICS The Java hosting environment saw numerous improvements over many releases ultimately resulting in the embedding of the WebSphere Liberty Profile into CICS Transaction Server V5 1 Numerous web facing technologies could be hosted in CICS using Java this ultimately resulted in the removal of the native CORBA and EJB technologies CICS TS V3 1 added a native implementation of the SOAP and WSDL technologies for CICS together with client side HTTP APIs for outbound communication These twin technologies enabled easier integration of CICS components with other Enterprise applications and saw widespread adoption Tools were included for taking traditional CICS programs written in languages such as COBOL and converting them into WSDL defined Web Services with little or no program changes This technology saw regular enhancements over successive releases of CICS CICS TS V4 1 and V4 2 saw further enhancements to web connectivity including a native implementation of the Atom publishing protocol Many of the newer web facing technologies were made available for earlier releases of CICS using delivery models other than a traditional product release This allowed early adopters to provide constructive feedback that could influence the final design of the integrated technology Examples include the Soap for CICS technology preview SupportPac for TS V2 2 or the ATOM SupportPac for TS V3 1 This approach was used to introduce JSON support for CICS TS V4 2 a technology that went on to be integrated into CICS TS V5 2 The JSON technology in CICS is similar to earlier SOAP technology both of which allowed programs hosted in CICS to be wrapped with a modern facade The JSON technology was in turn enhanced in z OS Connect 
Enterprise Edition an IBM product for composing JSON APIs that can leverage assets from several mainframe subsystems Many partner products have also been used to interact with CICS Popular examples include using the CICS Transaction Gateway for connecting to CICS from JCA compliant Java application servers and IBM DataPower appliances for filtering web traffic before it reaches CICS Modern versions of CICS provide many ways for both existing and new software assets to be integrated into distributed application flows CICS assets can be accessed from remote systems and can access remote systems user identity and transactional context can be propagated RESTful APIs can be composed and managed devices users and servers can interact with CICS using standards based technologies and the IBM WebSphere Liberty environment in CICS promotes the rapid adoption of new technologies MicroCICS Edit By January 1985 a 1969 founded consulting company having done massive on line systems for Hilton Hotels FTD Florists Amtrak and Budget Rent a Car announced what became MicroCICS 11 The initial focus was the IBM XT 370 and IBM AT 370 12 CICS Family Edit Although when CICS is mentioned people usually mean CICS Transaction Server the CICS Family refers to a portfolio of transaction servers connectors called CICS Transaction Gateway and CICS Tools CICS on distributed platforms not mainframes is called IBM TXSeries TXSeries is distributed transaction processing middleware It supports C C COBOL Java and PL I applications in cloud environments and traditional data centers TXSeries is available on AIX Linux x86 Windows Solaris and HP UX platforms 13 CICS is also available on other operating systems notably IBM i and OS 2 The z OS implementation i e CICS Transaction Server for z OS is by far the most popular and significant Two versions of CICS were previously available for VM CMS but both have since been discontinued In 1986 IBM released CICS CMS 14 11 which was a single user version of CICS 
designed for development use the applications later being transferred to an MVS or DOS VS system for production execution 15 16 Later in 1988 IBM released CICS VM 17 18 CICS VM was intended for use on the IBM 9370 a low end mainframe targeted at departmental use IBM positioned CICS VM running on departmental or branch office mainframes for use in conjunction with a central mainframe running CICS for MVS 19 CICS Tools Edit Provisioning management and analysis of CICS systems and applications is provided by CICS Tools This includes performance management as well as deployment and management of CICS resources In 2015 the four core foundational CICS tools and the CICS Optimization Solution Pack for z OS were updated with the release of CICS Transaction Server for z OS 5 3 The four core CICS Tools CICS Interdependency Analyzer for z OS CICS Deployment Assistant for z OS CICS Performance Analyzer for z OS and CICS Configuration Manager for z OS Releases and versions Edit CICS Transaction Server for z OS has used the following release numbers Version Announcement Date Release Date End of Service Date FeaturesOld version no longer maintained CICS Transaction Server for OS 390 1 1 1996 09 10 20 1996 11 08 2001 12 31Old version no longer maintained CICS Transaction Server for OS 390 1 2 1997 09 09 21 1997 10 24 2002 12 31Old version no longer maintained CICS Transaction Server for OS 390 1 3 1998 09 08 22 1999 03 26 2006 04 30Old version no longer maintained CICS Transaction Server for z OS 2 1 2001 03 13 23 2001 03 30 2002 06 30Old version no longer maintained CICS Transaction Server for z OS 2 2 2001 12 04 24 2002 01 25 2008 04 30Old version no longer maintained CICS Transaction Server for z OS 2 3 2003 10 28 25 2003 12 19 2009 09 30Old version no longer maintained CICS Transaction Server for z OS 3 1 2004 11 30 26 2005 03 25 2015 12 31Old version no longer maintained CICS Transaction Server for z OS 3 2 2007 03 27 27 2007 06 29 2015 12 31Old version no longer maintained 
CICS Transaction Server for z/OS 4.1; announced 2009-04-28;[28] released 2009-06-26; end of service 2017-09-30
Old version, no longer maintained: CICS Transaction Server for z/OS 4.2; announced 2011-04-05;[29] released 2011-06-24; end of service 2018-09-30
Old version, no longer maintained: CICS Transaction Server for z/OS 5.1; announced 2012-10-03;[30] released 2012-12-14; end of service 2019-07-01
Old version, no longer maintained: CICS Transaction Server for z/OS 5.2; announced 2014-04-07;[31] released 2014-06-13; end of service 2020-12-31
Old version, no longer maintained: CICS Transaction Server for z/OS 5.3; announced 2015-10-05;[32] released 2015-12-11; end of service 2021-12-31
Older version, still maintained: CICS Transaction Server for z/OS 5.4; announced 2017-05-16;[33] released 2017-06-16; end of service 2023-12-31
Older version, still maintained: CICS Transaction Server for z/OS 5.5; announced 2018-10-02;[34] released 2018-12-14
Older version, still maintained: CICS Transaction Server for z/OS 5.6; announced 2020-04-07;[35] released 2020-06-12; features: support for Spring Boot, Jakarta EE 8, Node.js 12; new JCICSX API with remote development capability; security, resilience, and management enhancements
Current stable version: CICS Transaction Server for z/OS 6.1; announced 2022-04-05;[36] released 2022-06-17; features: support for Java 11, Jakarta EE 9.1, Eclipse MicroProfile 5, Node.js 12, TLS 1.3; security enhancements and simplifications; region tagging

Programming

Programming considerations

Multiple-user interactive-transaction application programs were required to be quasi-reentrant in order to support multiple concurrent transaction threads. A software coding error in one application could block all users from the system. The modular design of CICS reentrant/reusable control programs meant that, with judicious "pruning", multiple users with multiple applications could be executed on a computer with just 32K of expensive magnetic-core physical memory, including the operating system.

Considerable effort was required by CICS application programmers to make their transactions as efficient as possible. A common technique was to
limit the size of individual programs to no more than 4,096 bytes, or 4K, so that CICS could easily reuse the memory occupied by any program not currently in use for another program or other application storage needs. When virtual memory was added to versions of OS/360 in 1972, the 4K strategy became even more important to reduce paging and thrashing, unproductive resource-contention overhead. The efficiency of compiled high-level COBOL and PL/I language programs left much to be desired, and many CICS application programs continued to be written in assembler language even after COBOL and PL/I support became available.

With 1960s and 1970s hardware resources expensive and scarce, a competitive "game" developed among system optimization analysts. When critical path code was identified, a code snippet was passed around from one analyst to another. Each person had to either (a) reduce the number of bytes of code required, or (b) reduce the number of CPU cycles required. Younger analysts learned from what more-experienced mentors did. Eventually, when no one could do (a) or (b), the code was considered optimized, and they moved on to other snippets. Small shops with only one analyst learned CICS optimization very slowly, or not at all.

Because application programs could be shared by many concurrent threads, the use of static variables embedded within a program, or the use of operating system memory, was restricted by convention only. Unfortunately, many of the "rules" were frequently broken, especially by COBOL programmers who might not understand the internals of their programs or who failed to use the necessary restrictive compile-time options. This resulted in non-re-entrant code that was often unreliable, leading to spurious storage violations and entire CICS system crashes.

Originally, the entire partition, or Multiple Virtual Storage (MVS) region, operated with the same memory protection key, including the CICS kernel code. Program corruption and CICS control block corruption was a frequent cause of system downtime. A software
error in one application program could overwrite the memory (code or data) of one or all currently running application transactions. Locating the offending application code for complex transient timing errors could be a very difficult operating-system analyst problem.

These shortcomings persisted for multiple new releases of CICS over a period of more than 20 years, in spite of their severity and the fact that top-quality CICS skills were in high demand and short supply. They were addressed in TS V3.3, V4.1 and V5.2 with the Storage Protection, Transaction Isolation and Subspace features, respectively, which utilize operating system hardware features to protect the application code and the data within the same address space, even though the applications were not written to be separated. CICS application transactions remain mission-critical for many public utility companies, large banks and other multibillion-dollar financial institutions.

Additionally, it is possible to provide a measure of advance application protection by performing test under control of a monitoring program that also serves to provide test and debug features.

Macro-level programming

When CICS was first released, it only supported application transaction programs written in IBM 360 Assembler. COBOL and PL/I support were added years later. Because of the initial assembler orientation, requests for CICS services were made using assembler-language macros. For example, the request to read a record from a file, made by a macro call to the File Control Program of CICS, might look like this:

DFHFC TYPE=READ,DATASET=myfile,TYPOPER=UPDATE,...etc.

This gave rise to the later terminology "macro-level CICS". When high-level language support was added, the macros were retained and the code was converted by a pre-compiler that expanded the macros to their COBOL or PL/I CALL statement equivalents. Thus, preparing a HLL application was effectively a two-stage compile: output from the preprocessor fed into the HLL compiler as input.

COBOL
considerations: Unlike PL/I, IBM COBOL does not normally provide for the manipulation of pointers (addresses). In order to allow COBOL programmers to access CICS control blocks and dynamic storage, the designers resorted to what was essentially a hack. The COBOL Linkage Section was normally used for inter-program communication, such as parameter passing. The compiler generates a list of addresses, each called a Base Locator for Linkage (BLL), which were set on entry to the called program. The first BLL corresponds to the first item in the Linkage Section, and so on. CICS allows the programmer to access and manipulate these by passing the address of the list as the first argument to the program. The BLLs can then be dynamically set, either by CICS or by the application, to allow access to the corresponding structure in the Linkage Section.[37]

Command-level programming

During the 1980s, IBM at Hursley Park produced a version of CICS that supported what became known as "Command-level CICS", which still supported the older programs but introduced a new API style to application programs. A typical Command-level call might look like the following:

EXEC CICS
    SEND MAPSET('LOSMATT') MAP('LOSATT')
END-EXEC

The values given in the SEND MAPSET command correspond to the names used on the first DFHMSD macro in the map definition given below for the MAPSET argument, and the DFHMDI macro for the MAP argument. This is pre-processed by a pre-compile batch translation stage, which converts the embedded commands (EXECs) into call statements to a stub subroutine. So, preparing application programs for later execution still required two stages. It was possible to write "mixed mode" applications using both Macro-level and Command-level statements.

Initially, at execution time, the command-level commands were converted using a run-time translator, "The EXEC Interface Program", to the old Macro-level call, which was then executed by the mostly unchanged CICS nucleus programs. But when the CICS kernel was re-written for TS V3, EXEC CICS
became the only way to program CICS applications, as many of the underlying interfaces had changed.

Run-time conversion

The Command-level-only CICS introduced in the early 1990s offered some advantages over earlier versions of CICS. However, IBM also dropped support for Macro-level application programs written for earlier versions. This meant that many application programs had to be converted or completely rewritten to use Command-level EXEC commands only.

By this time, there were perhaps millions of programs worldwide that had been in production for decades, in many cases. Rewriting them often introduced new bugs without necessarily adding new features. There were a significant number of users who ran CICS V2 application-owning regions (AORs) to continue to run macro code for many years after the change to V3.

It was also possible to execute old Macro-level programs using conversion software such as APT International's Command CICS.[38]

New programming styles

Recent CICS Transaction Server enhancements include support for a number of modern programming styles.

CICS Transaction Server Version 5.6[39] introduced enhanced support for Java to deliver a cloud-native experience for Java developers. For example, the new CICS Java API (JCICSX) allows easier unit testing using mocking and stubbing approaches, and can be run remotely on the developer's local workstation. A set of CICS artifacts on Maven Central enable developers to resolve Java dependencies using popular dependency management tools such as Apache Maven and Gradle. Plug-ins for Maven (cics-bundle-maven) and Gradle (cics-bundle-gradle) are also provided to simplify automated building of CICS bundles, using familiar IDEs like Eclipse, IntelliJ IDEA and Visual Studio Code. In addition, Node.js z/OS support is enhanced for version 12, providing faster startup, better default heap limits, updates to the V8 JavaScript engine, etc. Support for Jakarta EE 8 is also included.

CICS TS 5.5 introduced support for IBM SDK for Node.js, providing a
full JavaScript runtime, server-side APIs, and libraries to efficiently build high-performance, highly scalable network applications for IBM Z.

CICS Transaction Server Version 2.1 introduced support for Java. CICS Transaction Server Version 2.2 supported the Software Developers Toolkit. CICS provides the same run-time container as IBM's WebSphere product family, so Java EE applications are portable between CICS and WebSphere, and there is common tooling for the development and deployment of Java EE applications.

In addition, CICS placed an emphasis on "wrapping" existing application programs inside modern interfaces, so that long-established business functions can be incorporated into more modern services. These include WSDL, SOAP and JSON interfaces that wrap legacy code, so that a web or mobile application can obtain and update the core business objects without requiring a major rewrite of the back-end functions.

Transactions

A CICS transaction is a set of operations that perform a task together. Usually, the majority of transactions are relatively simple tasks, such as requesting an inventory list or entering a debit or credit to an account. A primary characteristic of a transaction is that it should be atomic. On IBM Z servers, CICS easily supports thousands of transactions per second, making it a mainstay of enterprise computing.

CICS applications comprise transactions, which can be written in numerous programming languages, including COBOL, PL/I, C, C++, IBM Basic Assembly Language, Rexx, and Java.

Each CICS program is initiated using a transaction identifier. CICS screens are usually sent as a construct called a map, a module created with Basic Mapping Support (BMS) assembler macros or third-party tools. CICS screens may contain text that is highlighted, has different colors, and/or blinks, depending on the terminal type used. An example of how a map can be sent through COBOL is given below. The end user inputs data, which is made accessible to the program by receiving a map from CICS:

EXEC CICS
    RECEIVE MAPSET('LOSMATT')
            MAP('LOSATT')
            INTO(OUR-MAP)
END-EXEC.

For technical reasons, the arguments to some command parameters must be quoted and some must not be quoted, depending on what is being referenced. Most programmers will code out of a reference book until they get the "hang", or concept, of which arguments are quoted, or they'll typically use a "canned template" where they have example code that they just copy and paste, then edit to change the values.

Example of BMS Map Code

Basic Mapping Support defines the screen format through assembler macros such as the following. This was assembled to generate both the physical map set (a load module in a CICS load library) and a symbolic map set (a structure definition, or DSECT, in PL/I, COBOL, assembler, etc., which was copied into the source program):[40]

LOSMATT DFHMSD TYPE=MAP,                                               X
               MODE=INOUT,                                             X
               TIOAPFX=YES,                                            X
               TERM=3270-2,                                            X
               LANG=COBOL,                                             X
               MAPATTS=(COLOR,HILIGHT),                                X
               DSATTS=(COLOR,HILIGHT),                                 X
               STORAGE=AUTO,                                           X
               CTRL=(FREEKB,FRSET)
LOSATT  DFHMDI SIZE=(24,80),                                           X
               LINE=1,                                                 X
               COLUMN=1
LSSTDII DFHMDF POS=(1,01),                                             X
               LENGTH=04,                                              X
               COLOR=BLUE,                                             X
               INITIAL='MQCM',                                         X
               ATTRB=PROT
        DFHMDF POS=(24,01),                                            X
               LENGTH=79,                                              X
               COLOR=BLUE,                                             X
               ATTRB=ASKIP,                                            X
               INITIAL='PF7-     8-     9-     10-                     X
               11-     12-CANCEL'
        DFHMSD TYPE=FINAL
        END

Structure

In the z/OS environment, a CICS installation comprises one or more "regions" (generally referred to as a "CICS Region"),[41] spread across one or more z/OS system images. Although it processes interactive transactions, each CICS region is usually started as a batch-processing address space with standard JCL statements: it's a job that runs indefinitely until shutdown. Alternatively, each CICS region may be started as a "started task". Whether a batch job or a started task, CICS regions may run for days, weeks, or even months before shutting down for maintenance (MVS or CICS). Upon restart, a parameter determines if the start should be "Cold" (no recovery) or "Warm"/"Emergency" (using a warm shutdown, or restarting from the log after a crash). Cold starts of large CICS regions with many resources can take a long
time, as all the definitions are re-processed.

Installations are divided into multiple address spaces for a wide variety of reasons, such as application separation, function separation, or avoiding the workload capacity limitations of a single region, address space, or mainframe instance in the case of a z/OS SysPlex.

A typical installation consists of a number of distinct applications that make up a service. Each service usually has a number of "Terminal-Owning Regions" (TORs) that route transactions to multiple "Application-Owning Regions" (AORs), though other topologies are possible. For example, the AORs might not perform File I/O. Instead, there would be a "File-Owning Region" (FOR) that performed the File I/O on behalf of transactions in the AOR, given that, at the time, a VSAM file could only support recoverable write access from one address space at a time.

But not all CICS applications use VSAM as the primary data source (or, historically, other single-address-space-at-a-time datastores such as CA Datacom): many use either IMS DB or Db2 as the database, and/or MQ as a queue manager. For all these cases, TORs can load-balance transactions to sets of AORs, which then directly use the shared databases/queues. CICS supports XA two-phase commit between data stores, and so transactions that spanned MQ, VSAM/RLS and Db2, for example, are possible with ACID properties.

CICS supports distributed transactions using the SNA LU6.2 protocol between the address spaces, which can be running on the same or different clusters. This allows ACID updates of multiple datastores by cooperating distributed applications. In practice, there are issues with this if a system or communications failure occurs, because the transaction disposition (backout or commit) may be in doubt if one of the communicating nodes has not recovered. Thus the use of these facilities has never been very widespread.

Sysplex exploitation

At the time of CICS ESA V3.2, in the early 1990s, IBM faced the challenge of how to get CICS to exploit the new z/OS Sysplex
mainframe line.

The Sysplex was to be based on CMOS (Complementary Metal Oxide Silicon) rather than the existing ECL (Emitter Coupled Logic) hardware. The cost of scaling the mainframe-unique ECL was much higher than CMOS, which was being developed by a keiretsu with high-volume use cases, such as the Sony PlayStation, to reduce the unit cost of each generation's CPUs. The ECL was also expensive for users to run, because the gate drain current produced so much heat that the CPU had to be packaged into a special module called a Thermal Conduction Module (TCM),[42] which had inert-gas pistons and needed to be plumbed with high-volume chilled water to be cooled. But the air-cooled CMOS technology's CPU speed initially was much slower than the ECL, notably the boxes available from the mainframe-clone makers Amdahl and Hitachi. This was especially concerning to IBM in the CICS context, as almost all the largest mainframe customers were running CICS, and for many of them it was the primary mainframe workload.

To achieve the same total transaction throughput on a Sysplex, multiple boxes would need to be used in parallel for each workload, but a CICS address space, due to its semi-reentrant application programming model, could not exploit more than about 1.5 processors on one box at the time, even with use of MVS sub-tasks. Without this, these customers would tend to move to the competitors rather than Sysplex as they scaled up the CICS workloads. There was considerable debate inside IBM as to whether the right approach would be to break upward compatibility for applications and move to a model like IMS/DC, which was fully reentrant, or to extend the approach customers had adopted to more fully exploit a single mainframe's power, using multi-region operation (MRO).

Eventually, the second path was adopted after the CICS user community was consulted and vehemently opposed breaking upward compatibility, given that they had the prospect of Y2K to contend with at that time and did not see the value in re-writing and testing
millions of lines of mainly COBOL, PL/1, or assembler code.

The IBM-recommended structure for CICS on Sysplex was that at least one CICS Terminal-Owning Region was placed on each Sysplex node, which dispatched transactions to many Application-Owning Regions (AORs) spread across the entire Sysplex. If these applications needed to access shared resources, they either used a Sysplex-exploiting datastore, such as IBM Db2 or IMS DB, or concentrated, by "function-shipping", the resource requests into singular-per-resource "Resource-Owning Regions" (RORs), including "File-Owning Regions" (FORs) for VSAM and CICS Data Tables, "Queue-Owning Regions" (QORs) for MQ, CICS Transient Data (TD), and CICS Temporary Storage (TS). This preserved compatibility for legacy applications at the expense of operational complexity to configure and manage many CICS regions.

In subsequent releases and versions, CICS was able to exploit new Sysplex-exploiting facilities in VSAM/RLS[43] and MQ for z/OS,[44] and placed its own Data Tables, TD, and TS resources into the architected shared resource manager for the Sysplex, the Coupling Facility (CF), dispensing with the need for most RORs. The CF provides a mapped view of resources, including a shared timebase, buffer pools, locks, and counters, with hardware messaging assists that made sharing resources across the Sysplex both more efficient than polling and more reliable, utilizing a semi-synchronized backup CF for use in case of failure.

By this time, the CMOS line had individual boxes that exceeded the power available from the fastest ECL box, with more processors per CPU, and when these were coupled together, 32 or more nodes would be able to scale two orders of magnitude larger in total power for a single workload. For example, by 2002, Charles Schwab was running a "MetroPlex" consisting of a redundant pair of its mainframe Sysplexes in two locations in Phoenix, AZ, each with 32 nodes, driven by one shared CICS/DB2 workload, to support the vast volume of pre-dotcom-bubble web client inquiry requests. This cheaper,
much more scalable CMOS technology base, and the huge investment costs of having to both get to 64-bit addressing and independently produce cloned CF functionality, drove the IBM-mainframe clone makers out of the business one by one.[45][46]

CICS Recovery/Restart

The objective of recovery/restart in CICS is to minimize and, if possible, eliminate damage done to the online system when a failure occurs, so that system and data integrity is maintained.[47] If the CICS region was shutdown instead of failing, it will perform a "Warm" start, exploiting the checkpoint written at shutdown. The CICS region can also be forced to "Cold" start, which reloads all definitions and wipes out the log, leaving the resources in whatever state they are in.

Under CICS, the following are some of the resources which are considered recoverable. If one wishes these resources to be recoverable, then special options must be specified in relevant CICS definitions:

VSAM files
CMT (CICS-maintained data tables)
Intrapartition TDQ
Temporary Storage Queue in auxiliary storage
I/O messages from/to transactions in a VTAM network
Other database/queuing resources connected to CICS that support the XA two-phase commit protocol (like IMS DB, Db2, VSAM/RLS)

CICS also offers extensive recovery/restart facilities for users to establish their own recovery/restart capability in their CICS system. Commonly used recovery/restart facilities include:

Dynamic Transaction Backout (DTB)
Automatic Transaction Restart
Resource Recovery using System Log
Resource Recovery using Journal
System Restart
Extended Recovery Facility

Components

Each CICS region comprises one major task on which every transaction runs, although certain services such as access to IBM Db2 data use other tasks (TCBs). Within a region, transactions are cooperatively multitasked: they are expected to be well-behaved and yield the CPU rather than wait. CICS services handle this automatically.

Each unique CICS "Task", or transaction, is allocated its own dynamic memory at start-up, and subsequent
requests for additional memory were handled by a call to the "Storage Control program" (part of the CICS nucleus, or "kernel"), which is analogous to an operating system.

A CICS system consists of the online nucleus, batch support programs, and applications services.[48]

Nucleus

The original CICS nucleus consisted of a number of functional modules written in 370 assembler until V3:

Task Control Program (KCP)
Storage Control Program (SCP)
Program Control Program (PCP)
Program Interrupt Control Program (PIP)
Interval Control Program (ICP)
Dump Control Program (DCP)
Terminal Control Program (TCP)
File Control Program (FCP)
Transient Data Control Program (TDP)
Temporary Storage Control Program (TSP)

Starting in V3, the CICS nucleus was rewritten into a kernel-and-domain structure using IBM's PL/AS language, which is compiled into assembler.

The prior structure did not enforce separation of concerns, and so had many inter-program dependencies which led to bugs unless exhaustive code analysis was done. The new structure was more modular, and so more resilient, because it was easier to change without impact. The first domains were often built with the name of the prior program, but without the trailing "P": for example, Program Control Domain (DFHPC) or Transient Data Domain (DFHTD). The kernel operated as a switcher for inter-domain requests; initially this proved expensive for frequently called domains (such as Trace), but by utilizing PL/AS macros, these calls were in-lined without compromising on the separate domain design.

In later versions, completely redesigned domains were added, like the Logging Domain (DFHLG) and Transaction Domain (DFHTM), that replaced the Journal Control Program (JCP).

Support programs

In addition to the online functions, CICS has several support programs that run as batch jobs:[49] (pp. 34-35)

High-level language (macro) preprocessor
Command language translator
Dump utility: prints formatted dumps generated by CICS Dump Management
Trace utility: formats and prints CICS trace output
Journal formatting utility:
prints a formatted dump of the CICS region in case of error.

Applications services

The following components of CICS support application development:[49] (pp. 35-37)

Basic Mapping Support (BMS): provides device-independent terminal input and output
APPC Support: provides LU6.1 and LU6.2 API support for collaborating distributed applications that support two-phase commit
Data Interchange Program (DIP): provides support for IBM 3770 and IBM 3790 programmable devices
2260 Compatibility: allows programs written for IBM 2260 display devices to run on 3270 displays
EXEC Interface Program: the stub program that converts calls generated by EXEC CICS commands to calls to CICS functions
Built-in Functions: table search, phonetic conversion, field verify, field edit, bit checking, input formatting, weighted retrieval

Pronunciation

Different countries have differing pronunciations.[50]

Within IBM (specifically Tivoli), it is referred to as /ˈkɪks/.
In the US, it is more usually pronounced by reciting each letter, /ˌsiːˌaɪˌsiːˈɛs/.
In Australia, Belgium, Canada, Hong Kong, the UK, and some other countries, it is pronounced /ˈkɪks/.
In Denmark, it is pronounced "kicks".
In Finland, it is pronounced "kiks".
In France, it is pronounced /se i se ɛs/.
In Germany, Austria, and Hungary, it is pronounced /ˈtsɪks/ and, less often, /ˈkɪks/.
In Greece, it is pronounced "kiks".
In India, it is pronounced "kicks".
In Iran, it is pronounced "kicks".
In Israel, it is pronounced "C-I-C-S".
In Italy, it is pronounced /ˈtʃiks/.
In Poland, it is pronounced /ˈkʲiks/.
In Portugal and Brazil, it is pronounced /ˈsiks/.
In Russia, it is pronounced "kiks".
In Slovenia, it is pronounced "kiks".
In Spain, it is pronounced /ˈθiks/.
In Sweden, it is pronounced "kicks".
In Uganda, it is pronounced "kicks".
In Turkey, it is pronounced "kiks".

See also

IBM TXSeries (CICS on distributed platforms)
IBM WebSphere
IBM 2741
IBM 2260
IBM 3270
OS/360 and successors
Transaction Processing Facility
Virtual Storage Access Method (VSAM)

References

"IBM CICS Transaction
Server for z/OS V5.6 delivers significant improvements to the developer experience, security, resilience, and management". IBM. 7 April 2020. Retrieved 28 July 2021.
IBM Corporation. "CICS Transaction Server for z/OS Glossary: T". IBM. Archived from the original on 15 June 2021. Retrieved 2 February 2021.
"IBM archives". IBM. 23 January 2003. Retrieved 6 December 2022.
"ESM Mainframe hall of fame". ESM. Retrieved 6 December 2022.
Customer Information Control System (CICS) General Information Manual (PDF). White Plains, New York: IBM. December 1972. GH20-1028-3. Archived (PDF) from the original on 29 May 2019. Retrieved 1 April 2016.
King, Steve (1993). "The Use of Z in the Restructure of IBM CICS". In Hayes, Ian (ed.). Specification Case Studies (2nd ed.). New York: Prentice Hall. pp. 202-213. ISBN 978-0-13-832544-2.
Warner, Edward (23 February 1987). "IBM Gives PC Programs Direct Mainframe Access: PC Applications Can Alter Files". InfoWorld. 9 (8): 1. Archived from the original on 24 December 2016. Retrieved 1 April 2016.
"IBM CICS Transaction Server for z/OS V5.2 takes service agility, operational efficiency, and cloud enablement to a new level". IBM. 7 April 2014. Archived from the original on 15 June 2021. Retrieved 14 April 2016. "CICS DDM is no longer available from IBM and support was discontinued as of December 31, 2003. CICS DDM is no longer available in CICS TS from Version 5.2 onwards."
"IBM z/VSE Central Functions Version 9.2 - z/VSE Version 5.2". IBM. 7 April 2014. Archived from the original on 24 March 2016. Retrieved 14 April 2016. "Support for CICS Distributed Data Management (DDM) is stabilized in CICS TS for VSE/ESA V1.1.1. In a future release of CICS TS for z/VSE, IBM intends to discontinue support for CICS DDM."
"IBM CICS Transaction Server for z/VSE V2.1 delivers enhancements for future workloads". IBM. 5 October 2015. Archived from the original on 24 April 2016. Retrieved 14 April 2016. "CICS Distributed Data Management (CICS/DDM) is not supported with CICS TS for z/VSE V2.1."
Paul E. Schindler, Jr. (27 October 1986). "Unicorn is Betting that CICS is easier and
cheaper on a PC". InformationWeek. pp. 41-44.
"Unicorn MicroCICS/RT". Computerworld. 9 December 1985. p. 98.
"IBM Personal Computer XT/370 family".
"IBM Get its CICS". Midrange Systems. 10 November 1992. p. 35. "announced October of 1985, didn't start deliveries until July of this year".
"CICS/CMS". IBM. Archived from the original on 2 April 2016. Retrieved 1 April 2016.
"CUSTOMER INFORMATION CONTROL SYSTEM/CONVERSATIONAL MONITOR SYSTEM (CICS/CMS) RELEASE 1 ANNOUNCED AND PLANNED TO BE AVAILABLE JUNE 1986". IBM. 15 October 1985. Archived from the original on 2 April 2016. Retrieved 2 April 2016.
"CICS/VM: Customer Information Control System/Virtual Machine". IBM. Archived from the original on 13 April 2016. Retrieved 1 April 2016.
"CUSTOMER INFORMATION CONTROL SYSTEM/VIRTUAL MACHINE (CICS/VM)". IBM. 20 October 1987. Archived from the original on 2 April 2016. Retrieved 2 April 2016.
Babcock, Charles (2 November 1987). "VM/SP update eases migration". Computerworld. Vol. 21, no. 44. IDG Enterprise. pp. 25-31. ISSN 0010-4841. Archived from the original on 31 March 2017. Retrieved 30 March 2017.
"US: IBM CICS Transaction Server (CICS TS) for OS/390". www.ibm.com. 3 February 2004. Retrieved 7 May 2022.
"US: IBM CICS Transaction Server (CICS TS) for OS/390". www.ibm.com. 3 February 2004. Retrieved 7 May 2022.
"US: IBM CICS Transaction Server (CICS TS) for OS/390". www.ibm.com. 3 February 2004. Retrieved 7 May 2022.
"CICS TS for z/OS V2". www.ibm.com. 23 May 2001. Retrieved 13 May 2022.
"IBM CICS Transaction Server for z/OS V2.2 Delivers Major Value to All CICS Customers". www.ibm.com. 4 December 2001. Retrieved 7 May 2022.
"IBM CICS Transaction Server for z/OS V2.3 advances towards on demand business". www.ibm.com. 28 October 2003. Retrieved 7 May 2022.
"IBM CICS Transaction Server for z/OS V3.1 offers improved integration & application transformation". www.ibm.com. 30 November 2004. Retrieved 7 May 2022.
"CICS Transaction Server for z/OS V3.2 delivers significant innovation for application connectivity". www.ibm.com. 27 March 2007. Retrieved 7 May 2022.
"IBM US Announcement Letter". www.ibm.com. 28 April
2009. Retrieved 7 May 2022.
"IBM US Announcement Letter". www.ibm.com. 5 April 2011. Retrieved 7 May 2022.
"IBM CICS Transaction Server for z/OS V5.1 delivers operational efficiency and service agility with cloud enablement". www.ibm.com. 3 October 2012. Retrieved 7 May 2022.
"IBM CICS Transaction Server for z/OS V5.2 takes service agility, operational efficiency, and cloud enablement to a new level". www.ibm.com. 7 April 2014. Retrieved 7 May 2022.
"IBM CICS Transaction Server for z/OS V5.3 delivers advances in service agility, operational efficiency, and cloud enablement with DevOps". www.ibm.com. 5 October 2015. Retrieved 7 May 2022.
"IBM CICS Transaction Server for z/OS V5.4 delivers unparalleled mixed language application serving". www.ibm.com. 16 May 2017. Retrieved 7 May 2022.
"IBM CICS Transaction Server for z/OS V5.5 delivers support for Node.js and further enhancements to CICS Explorer, systems management, and security". www.ibm.com. 2 October 2018. Retrieved 7 May 2022.
"IBM CICS Transaction Server for z/OS V5.6 delivers significant improvements to the developer experience, security, resilience, and management". www.ibm.com. 7 April 2020. Retrieved 6 May 2022.
"IBM CICS Transaction Server for z/OS 6.1 delivers significant improvements in the areas of developer productivity, security, and management". www.ibm.com. 5 April 2022. Retrieved 6 May 2022.
IBM Corporation (1972). Customer Information Control System (CICS) Application Programmer's Reference Manual (PDF). Archived (PDF) from the original on 29 May 2019. Retrieved 4 January 2016.
"Command CICS". IBM. Archived from the original on 15 June 2021. Retrieved 22 April 2018.
"IBM CICS Transaction Server for z/OS V5.6 delivers significant improvements to the developer experience, security, resilience, and management". 7 April 2020. Archived from the original on 10 July 2020. Retrieved 9 July 2020.
IBM Corporation. "Basic mapping support". CICS Information Center. Archived from the original on 3 January 2013.
IBM (13 September 2010). "CICS Transaction Server glossary". CICS Transaction Server for z/OS V3.
2. IBM Information Center, Boulder, Colorado. Archived from the original on 1 September 2013. Retrieved 12 December 2010.
"IBM Archives: Thermal conduction module". www-03.ibm.com. 23 January 2003. Archived from the original on 20 July 2016. Retrieved 1 June 2018.
"IMS Context". IMS. Chichester, UK: John Wiley & Sons, Ltd. 2009. pp. 1-39. doi:10.1002/9780470750001.ch1. ISBN 9780470750001.
"IBM Knowledge Center: MQ for z/OS". www.ibm.com. 11 March 2014. Archived from the original on 7 August 2016. Retrieved 1 June 2018.
Vijayan, Jaikumar. "Amdahl gives up on mainframe business". Computerworld. Archived from the original on 3 November 2018. Retrieved 1 June 2018.
"Hitachi exits mainframe hardware, but will collab with IBM on z Systems". Archived from the original on 13 June 2018. Retrieved 1 June 2018.
"IBM Knowledge Center". publib.boulder.ibm.com. Archived from the original on 15 June 2021. Retrieved 2 February 2021.
IBM Corporation (1975). Customer Information Control System (CICS) System Programmer's Reference Manual (PDF). Archived (PDF) from the original on 17 February 2011. Retrieved 21 November 2012.
IBM Corporation (1977). Customer Information Control System/Virtual Storage (CICS/VS) Version 1, Release 3: Introduction to Program Logic Manual (PDF). Archived from the original (PDF) on 17 February 2011. Retrieved 24 November 2012.
"CICS: An Introduction" (PDF). IBM Corporation. 8 July 2004. Retrieved 20 April 2014.

External links

Official website
"Why to choose CICS Transaction Server for new IT projects": IBM CICS whitepaper
IBM Software: CICS 35-year Anniversary (2004) at the Wayback Machine (archived February 4, 2009)
Support Forum for CICS Programming
CICS User Community: website for CICS-related news, announcements, and discussions
Bob Yelavich's CICS-focused website at the Wayback Machine (archived February 5, 2005). This site uses frames; on high-resolution screens, the left-hand frame, which contains the site index, may be hidden. Scroll right within the frame to see its content.