This invention relates, in general, to processing within a multi-server environment, and in particular, to managing orphaned requests in the multi-server environment.

BACKGROUND OF THE INVENTION
In a typical multi-server environment, there are a plurality of request servers that receive work requests from requestors, such as client computers, and place those requests on a common queue. Then, a plurality of worker servers take the work requests off the queue, process the requests and return responses to the queue. Thereafter, the request servers that initially received the requests take their respective responses from the queue and return the responses to the original requestors.
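The common-queue pattern described above can be sketched as follows. This is an illustrative sketch only, not part of the claimed invention; the message fields (`kind`, `id`, `payload`, `result`) and the use of Python's `queue.Queue` as the communication queue are assumptions made for the example.

```python
import queue

# The shared communication queue between request servers and worker servers.
common_queue = queue.Queue()

def request_server_submit(request_id, payload):
    """A request server places an incoming work request on the common queue."""
    common_queue.put({"kind": "request", "id": request_id, "payload": payload})

def worker_server_step():
    """A worker server takes one request off the queue, processes it,
    and places the response back on the queue."""
    msg = common_queue.get()
    result = msg["payload"].upper()  # stand-in for the real work
    common_queue.put({"kind": "response", "id": msg["id"], "result": result})

request_server_submit(1, "hello")
worker_server_step()
# The request server then takes the response off the queue and matches
# it to the original requestor by id.
response = common_queue.get()
```

In practice the request server would match each response to its originating requestor by the request id before returning it.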
If a request server fails and there are still pending requests for the request server, then a message is sent to the one or more requestors of those pending requests indicating that the requests are aborted. However, the worker servers may continue to process the requests and return responses to the queue. Then, if the request server restarts, either the request server will mistake old responses for new responses, and thus, give incorrect responses to new requests, or old responses will be left indefinitely on the queue, eventually clogging the queue.

SUMMARY OF THE INVENTION
Based on the foregoing, a need exists for a capability to more efficiently manage orphaned requests. In particular, a need exists for a capability that automatically detects that there are orphaned requests for a particular request server and handles those requests without affecting the requestors, and while allowing the request server to continue processing new requests.
The shortcomings of the prior art are overcome and additional advantages are provided through the provision of a computer program product for managing orphaned requests in a multi-server processing environment. The computer program product includes a storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes, for instance: automatically determining, by a request server executing on a processor of the multi-server processing environment, in response to recovery of the request server in which the request server is to process a new generation of one or more requests, whether there are one or more previous generations of requests of the request server that are outstanding; and immunizing, in response to the automatically determining indicating that there are one or more previous generations of requests, the request server from the one or more previous generations of requests, wherein the immunizing includes: selecting, by the request server, one or more messages associated with one or more requests from the one or more previous generations of requests; and processing the one or more messages, the processing including deleting one or more messages or saving one or more messages; wherein, concurrent to the immunizing, one or more requests of the new generation of one or more requests are capable of being processed by the request server.
Methods and systems relating to one or more aspects of the present invention are also described and claimed herein. Further, services relating to one or more aspects of the present invention are also described and may be claimed herein.
Additional features and advantages are realized through the techniques of the present invention. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention.

BRIEF DESCRIPTION OF THE DRAWINGS
One or more aspects of the present invention are particularly pointed out and distinctly claimed as examples in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
DETAILED DESCRIPTION OF THE INVENTION
In accordance with an aspect of the present invention, in response to initialization of a request server, either for the first time or in recovery, the request server automatically detects if there are previous generations of requests outstanding for the request server, and if so, immunizes itself against those requests. That is, the request server protects itself from the harmful effects of previous generations of requests. As one example, the request server starts one or more threads that are designed to gather messages associated with requests from previous incarnations of the server and process those messages without the requestors knowing of the processing and without affecting the requestors. The messages associated with the requests include requests and/or responses to the requests, and the processing includes, for instance, disposing of those requests and/or responses, or saving the responses for some time period for potential recovery, as desired or appropriate. No matter how long the in-process messages take, the immunization lasts until all requests of a given generation are processed. If the server goes down again, this new generation is immunized, along with any other previous generations that have not been completely disposed of. This processing is performed without any conventional logging. Further, while this request server is immunized, it continues processing new requests, as usual.
One embodiment of a multi-server processing environment 100 to incorporate and use one or more aspects of the present invention is described with reference to the accompanying drawings.
In other examples, the client and/or server may be other types of computers or processors. Yet further, in another example, the client and server are executing on the same machine, such as the same zSeries® machine.
Server 104 includes, for instance, a request server 106 that receives requests from a requestor, such as client 102, and places those requests in a communication queue 108. Communication queue 108 is coupled to a plurality of worker servers 110, which take requests off the queue, process the requests, and then place responses to the requests back on the queue. Subsequently, the request server takes the responses off the queue and sends the responses to the originating requestor (e.g., client 102). Although one request server and three worker servers are depicted in the drawings, there may be more or fewer of each.
Further details regarding one example of server 104 are described with reference to the accompanying drawings.
Virtual machine operating system 154 includes a common base portion 156 (called “CP” in the z/VM® operating system) and a plurality of virtual machines (or virtual servers) 158. During installation of the virtual machine operating system, the system administrator defines one or more virtual machines 158. These virtual machines include one or more request servers, as well as one or more worker servers, which perform the work requested by the request servers. The virtual machines are coupled to a shared memory 170 (e.g., within physical computer 150), which includes, for instance, a communication queue 172. The communication queue is a communication mechanism used between the request servers and the worker servers. When a request server receives a request 174 from a requestor, it places the request on the communication queue. A worker server then pulls the request off the communication queue, processes the request, and places a response back on the communication queue. Thereafter, the request server extracts the response off the communication queue, and provides the response back to the requestor.
In accordance with an aspect of the present invention, prior to placing request 174 on the communication queue, the request server modifies the request by adding a tag 176 to it. Tag 176 indicates, for instance, the request server that owns the request and the generation of the request server. There are many possibilities for indicating the generation of the request server, including, for instance, the last date of outage of the request server, a counter that indicates how many times this request server has been initialized, or a number indicating the start date/time of the request server, etc. When a worker server retrieves the request, it leaves behind a copy of the tag, so that the request server can determine if a worker server has one of its requests.
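The tagging step above can be illustrated with a minimal sketch. The tag field names (`owner`, `generation`) and the choice of a start date/time string as the generation indicator are assumptions for the example; the specification allows other generation indicators.

```python
# Illustrative generation indicator: the start date/time of this server
# instance (one of the options named in the text).
GENERATION = "2024-01-15T09:30:00"

def tag_request(request, server_id, generation):
    """Attach an ownership tag to a request before it is placed on the
    communication queue, so orphans can later be identified."""
    tagged = dict(request)
    tagged["tag"] = {"owner": server_id, "generation": generation}
    return tagged

req = tag_request({"payload": "work"}, "REQSRV1", GENERATION)
```

A worker server taking this request off the queue would leave a copy of `req["tag"]` behind, so the request server can see that the worker holds one of its requests.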
When a request server is initialized, either for the first time or thereafter, it automatically checks if there are previous generations of outstanding requests and immunizes itself against those requests. For example, it starts one or more immunization threads that are designed to gather messages associated with previous incarnations of the request server and handle those messages, as appropriate. This processing, which is performed by the request server, is described in further detail with reference to the accompanying drawings.
In one example, in response to initialization of a request server, STEP 200, a determination is made as to whether a previous server generation message is on the communication queue for this request server, INQUIRY 202. That is, a determination is made as to whether a message exists on the queue indicating there was a previous generation of the request server, and thus, there may be outstanding requests/responses for the request server.
Assuming this is the first initialization of the request server, and therefore, there is no previous server generation message on the queue, processing proceeds with STEP 204. In particular, any previous server generation messages which were taken off the queue for processing, in which in this case there are none, are placed back on the communication queue, as described in further detail below, STEP 204. Further, a server generation message is created for this generation of the request server, STEP 206. In one example, the server generation message includes a key indicating it is a server generation message, the userid of the request server, and a unique value that can distinguish this generation from others (e.g., the time this message is generated). The new server generation message is placed on the communication queue, STEP 208. Thereafter, the request server waits for client requests, STEP 210. When a client request is received, it processes the client request, STEP 212. Processing then continues with waiting for client requests, STEP 210. This processing continues until the request server ends.
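The server generation message created in STEPs 206-208 can be sketched as below. The dictionary field names and the use of the creation time as the distinguishing value are assumptions; the text only requires a key identifying the message type, the server's userid, and a value unique to this generation.

```python
import time

def make_generation_message(server_userid):
    """Build a server generation message: a key marking it as such, the
    request server's userid, and a unique value distinguishing this
    generation from others (here, the time the message is generated)."""
    return {"key": "SERVER_GENERATION",
            "userid": server_userid,
            "generation": time.time()}

communication_queue = []                    # stand-in for the shared queue
msg = make_generation_message("REQSRV1")
communication_queue.append(msg)             # STEP 208: place on the queue
```

On a later restart, finding this message on the queue is what tells the server that a previous generation existed and may have outstanding requests.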
Returning to INQUIRY 202, if there is a previous server generation message on the queue for this request server indicating that this request server has been re-initialized and there may be outstanding requests for an earlier generation of the request server, then a server generation message for this request server is taken off the communication queue, STEP 220. In response thereto, at least one immunization thread is started to gather messages (e.g., requests and/or responses) associated with requests for this generation, STEP 222. This is described in further detail below with reference to the accompanying drawings.
If there are additional previous server generation messages on the queue for this request server, then another generation message is taken off the queue, STEP 220, and an immunization thread is started, STEP 222, as described above. However, if there are no more previous server generation messages on the queue for this request server, INQUIRY 202, then processing continues with putting the previous server generation messages that were taken off the queue in STEP 220 back on the queue, STEP 204, and creating a server generation message for this generation of request server, as described above.
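The recovery loop of STEPs 220, 222, and 204-208 can be sketched as one pass over the queue. This is an illustrative sketch only: the message field names and the `start_immunization` stub (which stands in for starting an immunization thread) are assumptions made for the example.

```python
queue_msgs = [
    {"key": "SERVER_GENERATION", "userid": "REQSRV1", "generation": "gen-1"},
]
immunized = []

def start_immunization(gen_msg):
    # Placeholder for STEP 222: starting an immunization thread for one
    # previous generation; here we merely record which generation it is.
    immunized.append(gen_msg["generation"])

def recover(msgs, server_userid, new_generation):
    """Sketch of the recovery path: take each previous server generation
    message off the queue (STEP 220), start an immunization pass for it
    (STEP 222), put the messages back (STEP 204), then create and place
    this generation's message (STEPs 206-208)."""
    previous = [m for m in msgs
                if m.get("key") == "SERVER_GENERATION"
                and m.get("userid") == server_userid]
    for m in previous:
        msgs.remove(m)
        start_immunization(m)
    msgs.extend(previous)
    msgs.append({"key": "SERVER_GENERATION",
                 "userid": server_userid,
                 "generation": new_generation})

recover(queue_msgs, "REQSRV1", "gen-2")
```

Note that the previous generation messages go back on the queue: they are only removed (STEP 312) once their immunization pass finds no remaining requests or responses.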
Further details relating to immunizing the request server from orphaned requests are described with reference to the accompanying drawings.
If there are no requests on the communication queue for the input generation, then a further determination is made as to whether there are any responses on the communication queue for the input generation, INQUIRY 310. As an example, this determination is made via the tags, which are also included with the responses. If there are responses, then processing continues with wait for response, STEP 306, which should be immediate. That is, since the response is indicated as being on the communication queue, the request server takes the response and processes that response, which includes, for instance, removing the response from the queue and/or saving the response for a defined time, STEP 308. Thereafter, processing continues with INQUIRY 304.
Returning to INQUIRY 310, if there are no requests or responses on the communication queue for the input generation, the server generation message corresponding to this generation is removed from the communication queue, STEP 312. The thread processing is then complete, STEP 314.
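The body of one immunization pass, covering the request/response draining and STEP 312, can be sketched as follows. The message shapes are the same illustrative assumptions as above; the choice to save responses and simply discard requests is one of the dispositions the text permits.

```python
def immunize(msgs, server_userid, generation, saved):
    """Drain messages tagged with a previous generation of this request
    server: discard orphaned requests, save orphaned responses for
    potential recovery, then remove that generation's server generation
    message (STEP 312) once nothing else remains."""
    remaining = []
    for m in msgs:
        tag = m.get("tag", {})
        if tag.get("owner") == server_userid and tag.get("generation") == generation:
            if m.get("kind") == "response":
                saved.append(m)   # save the response for a defined time
            continue              # orphaned message leaves the queue
        if (m.get("key") == "SERVER_GENERATION"
                and m.get("userid") == server_userid
                and m.get("generation") == generation):
            continue              # STEP 312: remove the generation message
        remaining.append(m)
    msgs[:] = remaining

queue_msgs = [
    {"kind": "request", "tag": {"owner": "REQSRV1", "generation": "gen-1"}},
    {"kind": "response", "tag": {"owner": "REQSRV1", "generation": "gen-1"}},
    {"kind": "request", "tag": {"owner": "REQSRV1", "generation": "gen-2"}},
    {"key": "SERVER_GENERATION", "userid": "REQSRV1", "generation": "gen-1"},
]
saved = []
immunize(queue_msgs, "REQSRV1", "gen-1", saved)
```

Only the current-generation request survives the pass; the requestors are never contacted, which is what keeps the immunization invisible to them.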
Described in detail above is an efficient technique for immunizing request servers from orphaned requests in a multi-server processing environment, while enabling the request server to continue processing new requests and without confusing requestors.
One or more aspects of the present invention can be included in a computer program product to facilitate one or more aspects of the present invention. The computer program product includes a storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing one or more of the capabilities of the present invention.
In one example, an article of manufacture (e.g., one or more computer program products) having, for instance, computer readable media includes one or more aspects of the present invention. The media has therein, for instance, computer readable program code means or logic (e.g., instructions, code, commands, etc.) to provide and facilitate the capabilities of the present invention. The article of manufacture can be included as a part of a computer system or sold separately.
One example of an article of manufacture or a computer program product incorporating one or more aspects of the present invention is described with reference to the accompanying drawings.
A sequence of program instructions or a logical assembly of one or more interrelated modules, defined by one or more computer readable program code means or logic, directs the performance of one or more aspects of the present invention.
Moreover, one or more aspects of the present invention can be provided, offered, deployed, managed, serviced, etc. by a service provider who offers management of customer environments. For instance, the service provider can create, maintain, support, etc. computer code and/or a computer infrastructure that performs one or more aspects of the present invention for one or more customers. In return, the service provider can receive payment from the customer under a subscription and/or fee agreement, as examples. Additionally or alternatively, the service provider can receive payment from the sale of advertising content to one or more third parties.
In one aspect of the present invention, an application can be deployed for performing one or more aspects of the present invention. As one example, the deploying of an application comprises providing computer infrastructure operable to perform one or more aspects of the present invention.
As a further aspect of the present invention, a computing infrastructure can be deployed comprising integrating computer readable code into a computing system, in which the code in combination with the computing system is capable of performing one or more aspects of the present invention.
As yet a further aspect of the present invention, a process for integrating computing infrastructure comprising integrating computer readable code into a computer system may be provided. The computer system comprises a computer readable medium, in which the computer readable medium comprises one or more aspects of the present invention. The code in combination with the computer system is capable of performing one or more aspects of the present invention.
Advantageously, request servers are immunized from orphaned requests of the request servers without requiring conventional logging and while the request servers continue processing new requests. This technique is particularly useful in those situations in which it takes little time (e.g., low single digits of seconds) to recover the request servers and worker servers.
Advantageously, each time a request server receives a request, it attaches a tag including the request server id and the current generation. The current generation can be, for example, the last date of outage of the request server. If the request server fails again, a message is sent to the clients or original requestors indicating that their pending requests have been aborted. When the request server restarts, it checks the queue for both requests and responses which identify the request server and the previous generations, and processes them (e.g., deletes them, or saves them for future recovery) without knowledge of the requestors. Thus, incorrect responses are not sent to requestors, and old responses are not left on the queue.
In one example, the request servers are virtual servers executing within a multi-server processing environment. Further details regarding virtual servers are described in U.S. Pat. No. 7,299,468, entitled “Management of Virtual Machines to Utilize Shared Resources,” Casey et al., issued Nov. 20, 2007; and U.S. Pat. No. 7,490,324, entitled “System and Method for Transferring Data Between Virtual Machines or Other Computer Entities,” Shultz et al., issued Feb. 10, 2009, each of which is hereby incorporated herein by reference in its entirety.
Although various embodiments are described above, these are only examples. For example, the generation messages, tags and/or generation indicators can be other than described herein. As examples, they can include more, less or different information. Further, the immunizing processing can be performed by entities other than threads. Moreover, servers may be based on architectures other than the z/Architecture®. Yet further, there may be more or fewer clients and/or servers than described herein, and/or there may be other types of clients and/or servers.
Further, other types of computing environments can benefit from one or more aspects of the present invention. As an example, an environment may include an emulator (e.g., software or other emulation mechanisms), in which a particular architecture (including, for instance, instruction execution, architected functions, such as address translation, and architected registers) or a subset thereof is emulated (e.g., on a native computer system having a processor and memory). In such an environment, one or more emulation functions of the emulator can implement one or more aspects of the present invention, even though a computer executing the emulator may have a different architecture than the capabilities being emulated. As one example, in emulation mode, the specific instruction or operation being emulated is decoded, and an appropriate emulation function is built to implement the individual instruction or operation.
In an emulation environment, a host computer includes, for instance, a memory to store instructions and data; an instruction fetch unit to fetch instructions from memory and, optionally, to provide local buffering for the fetched instructions; an instruction decode unit to receive the fetched instructions and to determine the type of instructions that have been fetched; and an instruction execution unit to execute the instructions. Execution may include loading data into a register from memory; storing data back to memory from a register; or performing some type of arithmetic or logical operation, as determined by the decode unit. In one example, each unit is implemented in software. For instance, the operations being performed by the units are implemented as one or more subroutines within emulator software.
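The fetch-decode-execute cycle described above can be sketched in miniature. This is not an emulation of any actual architecture: the tuple instruction format and the opcode names (`LOAD`, `STORE`, `ADD`) are illustrative assumptions.

```python
def run(program, memory, regs):
    """Minimal fetch-decode-execute loop: each iteration fetches one
    instruction, decodes its opcode, and executes it against the
    registers and memory, as the units described above would."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]        # fetch + decode
        if op == "LOAD":               # load data into a register from memory
            r, addr = args
            regs[r] = memory[addr]
        elif op == "STORE":            # store data back to memory from a register
            r, addr = args
            memory[addr] = regs[r]
        elif op == "ADD":              # arithmetic in the execution unit
            r, s = args
            regs[r] = regs[r] + regs[s]
        pc += 1

memory = {0: 5, 1: 7, 2: 0}
regs = {"r0": 0, "r1": 0}
run([("LOAD", "r0", 0), ("LOAD", "r1", 1),
     ("ADD", "r0", "r1"), ("STORE", "r0", 2)], memory, regs)
```

In a real emulator each branch would be a subroutine of the emulator software, as the text notes.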
Further, a data processing system suitable for storing and/or executing program code is usable that includes at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements include, for instance, local memory employed during actual execution of the program code, bulk storage, and cache memory which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
Input/Output or I/O devices (including, but not limited to, keyboards, displays, pointing devices, DASD, tape, CDs, DVDs, thumb drives and other memory media, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the available types of network adapters.
The capabilities of one or more aspects of the present invention can be implemented in software, firmware, hardware, or some combination thereof. At least one program storage device readable by a machine tangibly embodying at least one program of instructions executable by the machine to perform the capabilities of the present invention can be provided.
The flow diagrams depicted herein are just examples. There may be many variations to these diagrams or the steps (or operations) described therein without departing from the spirit of the invention. For instance, the steps may be performed in a differing order, or steps may be added, deleted, or modified. All of these variations are considered a part of the claimed invention.
Although embodiments have been depicted and described in detail herein, it will be apparent to those skilled in the relevant art that various modifications, additions, substitutions and the like can be made without departing from the spirit of the invention and these are therefore considered to be within the scope of the invention as defined in the following claims.