Exposing the Vessel Scheduling as a REST Service
Many Operations Research applications employ complex, powerful algorithms that perform sequential tasks, such as generating candidate data, formulating a mathematical program, and finding an optimal solution.
This core algorithmic process can be easily exposed as a REST API service, allowing external client applications—whether internal systems, web frontends, or other services—to submit input data and receive optimal solutions.
A service in an AIMMS model is defined simply by associating a service name with an AIMMS procedure. A key feature is the flexibility in data formats (like JSON, XML, or Excel) used for the input (request body) and output (response body).
This article details the general process of service implementation in AIMMS:
Defining and implementing the service procedure.
Understanding the procedure’s logic, focusing on achieving statelessness for reliable, concurrent execution.
Running and controlling the service in different environments (AIMMS IDE,
AimmsCmd, AIMMS Cloud).
Defining and Implementing the Service
Coding the Service Procedure
A service is formally defined by setting the dex::ServiceName property on an AIMMS procedure. The following code snippet shows the main procedure, which acts as the service entry point:
 1  Procedure pr_solveVesselSchedulingExcel {
 2      Body: {
 3          block
 4
 5              pr_initTask();
 6
 7              _sp_inp := dex::api::RequestAttribute( 'request-data-path' );
 8              _sp_out := dex::api::RequestAttribute( 'response-data-path' );
 9
10              pr_actuallySolveVesselSchedulingExcel( _sp_inp, _sp_out );
11
12          onerror _ep_err do
13
14              _sp_msg := errh::Message( _ep_err );
15              display _sp_msg;
16
17          endblock;
18
19          return 1;
20      }
21      dex::ServiceName: solveVesselSchedulingExcel;  ! the service name used by external callers
22      StringParameter _sp_inp;
23      StringParameter _sp_out;
24      ElementParameter _ep_err {
25          Range: errh::PendingErrors;
26      }
27      StringParameter _sp_msg;
28  }
Remarks:
Procedure name (line 1): pr_solveVesselSchedulingExcel is executed when the service is called.
Service name (line 21): solveVesselSchedulingExcel is the external endpoint name used by clients.
Data paths (lines 7 and 8): the dex::api::RequestAttribute function retrieves the temporary file paths where the request body (input) and response body (output) data reside.
Ensuring Statelessness with pr_initTask
When an application runs as a service, it’s crucial that each request (or “task”) runs independently. The procedure pr_initTask is executed at the start of every task to enforce statelessness by resetting the model identifiers.
The term “Data Model” refers to the core AIMMS identifiers (sets, parameters, variables, and constraints) that represent the objects in the real-world problem being modeled (e.g., s_cargoes, s_vessels).
These must be cleared before a new task begins. In contrast, “application management identifiers” (e.g., WebUI and PRO library identifiers, logging paths) should be left untouched.
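The same separation can be sketched outside AIMMS. The Python below is a hypothetical illustration (the class and its names are invented, not part of the model): per-task data is cleared at the start of every request, while application-level configuration survives across requests.

```python
# Hypothetical illustration of the statelessness pattern enforced by
# pr_initTask: per-task model data is cleared before every request,
# while application-level configuration (e.g., a logging path) is kept.

class TaskRunner:
    def __init__(self, log_path: str):
        # Application management state: set once, never reset per task.
        self.log_path = log_path
        # Data-model state: must be empty at the start of every task.
        self.cargoes: list[str] = []
        self.vessels: list[str] = []

    def _init_task(self) -> None:
        """Analogue of pr_initTask: reset the data model, keep app config."""
        self.cargoes.clear()
        self.vessels.clear()

    def handle_request(self, cargoes: list[str], vessels: list[str]) -> dict:
        self._init_task()                 # enforce statelessness
        self.cargoes = list(cargoes)      # load the request data
        self.vessels = list(vessels)
        return {"n_cargoes": len(self.cargoes), "n_vessels": len(self.vessels)}

runner = TaskRunner(log_path="/var/log/service.log")
first = runner.handle_request(["c1", "c2"], ["v1"])
second = runner.handle_request(["c3"], ["v2", "v3"])
# The second task is unaffected by data left over from the first,
# while runner.log_path is untouched by either task.
```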
 1  Procedure pr_initTask {
 2      Body: {
 3          ! Reset the data model (clear model data).
 4          empty s_cargoes, s_vessels, s_locations, s_calc_feasibleRoutes;
 5
 6          ! Clean up any dynamically generated mathematical programs.
 7          _ep_gmp := first( AllGeneratedMathematicalPrograms );
 8          while _ep_gmp do
 9              gmp::Instance::Delete( _ep_gmp );
10              ! Re-fetch: Delete removes the element from the set.
11              _ep_gmp := first( AllGeneratedMathematicalPrograms );
12          endwhile;
13
14          ! Other cleanups.
15          StringGarbageCollect();
16          CleanDependents();
17      }
18      ElementParameter _ep_gmp {
19          Range: AllGeneratedMathematicalPrograms;
20      }
21  }
The Core Logic: pr_actuallySolveVesselSchedulingExcel
This procedure handles the business logic: reading the input file, executing the optimization, and writing the results to the output file.
 1  Procedure pr_actuallySolveVesselSchedulingExcel {
 2      Arguments: (sp_inp, sp_out);
 3      Body: {
 4          dex::AddMapping(
 5              mappingName : "ImportDataSet",
 6              mappingFile : "Mappings/Generated/ImportDataSet-Excel.xml");
 7
 8          p_vesselVelocity := 37.04 [km/hour];  ! example of setting a model parameter
 9
10          if dex::ReadFromFile(  ! read the request data from the temporary input file
11              dataFile    : sp_inp,
12              mappingName : "Generated/ImportDataSet-Excel") then
13
14              ! Activate all master data.
15              bp_activeCargoes(i_cargo)   := 1;
16              bp_activeVessels(i_vessel)  := 1;
17              bp_activeLocations(i_loc)   := 1;
18          endif;
19
20          pr_calculateRoutesAndCost( ep_routeCalculationImplementation : ep_selectedRouteCalculationImplementation );
21
22          solve mm::mp_vesselScheduling;
23
24          ! Post-execution: process and structure the results.
25          mm::pr_post_vesselResults();
26          mm::pr_post_cargoResults();
27          mm::pr_post_routeResults();
28
29          dex::WriteToFile(  ! write the response data to the temporary output file
30              dataFile    : sp_out,
31              mappingName : "Generated/ExportDataSet-Excel",
32              pretty      : 1);
33      }
34      DeclarationSection Argument_declarations {
35          StringParameter sp_inp { Property: Input; }
36          StringParameter sp_out { Property: Input; }
37      }
38  }
Remarks:
Data in (line 10): the dex::ReadFromFile function uses the input path (sp_inp) and the ImportDataSet-Excel data mapping to load the request data into the model identifiers.
Data out (line 29): the dex::WriteToFile function uses the output path (sp_out) and a second data mapping (ExportDataSet-Excel) to write the solution data back, which is then returned as the response body.
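The read/solve/write pattern of this procedure can be mirrored in a small Python sketch. This is a hypothetical JSON analogue, not the AIMMS implementation: the dex mappings and Excel files are replaced by plain JSON, and the "solve" step is a placeholder round-robin assignment.

```python
# Hypothetical JSON analogue of pr_actuallySolveVesselSchedulingExcel:
# read the request body from the input path, compute, and write the
# response body to the output path. (The real service uses dex mappings
# and Excel workbooks instead of JSON.)
import json
import os
import tempfile

def solve_from_files(inp_path: str, out_path: str) -> None:
    with open(inp_path) as f:
        request = json.load(f)
    # Placeholder "solve": assign cargoes to vessels round-robin.
    vessels = request["vessels"]
    assignment = {c: vessels[i % len(vessels)]
                  for i, c in enumerate(request["cargoes"])}
    with open(out_path, "w") as f:
        json.dump({"assignment": assignment}, f, indent=2)  # pretty-printed

# Usage with temporary files, mirroring the request/response data paths
# handed to the service procedure:
tmp = tempfile.mkdtemp()
inp, out = os.path.join(tmp, "in.json"), os.path.join(tmp, "out.json")
with open(inp, "w") as f:
    json.dump({"cargoes": ["c1", "c2", "c3"], "vessels": ["v1", "v2"]}, f)
solve_from_files(inp, out)
with open(out) as f:
    print(json.load(f)["assignment"])  # {'c1': 'v1', 'c2': 'v2', 'c3': 'v1'}
```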
Service Management and Execution
Starting and Stopping the Service
How you manage the service depends on the execution environment:
| Environment | Management Method | Purpose |
|---|---|---|
| AIMMS IDE | Manually call dex::api::StartAPIService from within the model. | For development and testing purposes. |
| AimmsCmd / Docker | Use dex::api::RESTServiceHandler. | Starts the service headless, providing fine control over resource management. |
| AIMMS Cloud | Automatic; no manual action is needed. | The service is automatically provisioned and started when a task is posted. |
Controlling Resources in Headless Mode
When running the service headless via dex::api::RESTServiceHandler (under AimmsCmd or Docker), execution is automatically bounded by a timeout and a maximum number of requests. You can control this behavior using the following session arguments:
dex::api::RESTServiceMaxRequests: the maximum number of requests the service handles before shutting down.
dex::api::RESTServiceMinTimeout: the minimum amount of time (in seconds) the service will run.
dex::api::RESTServiceTimeout: the maximum amount of time (in seconds) the service will run.
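One plausible way to read these three settings together is sketched below in Python. This is an illustration of the shutdown policy as described above, not AIMMS code, and the combination logic is an assumption: never stop before the minimum timeout, stop once the request budget is exhausted, and always stop at the maximum timeout.

```python
# Hypothetical sketch of how the three shutdown settings could interact.
def should_shut_down(elapsed_s: float, n_requests: int,
                     max_requests: int, min_timeout_s: float,
                     max_timeout_s: float) -> bool:
    if elapsed_s >= max_timeout_s:      # hard upper bound on service lifetime
        return True
    if elapsed_s < min_timeout_s:       # guaranteed minimum lifetime
        return False
    return n_requests >= max_requests   # request budget exhausted

# A service allowing 100 requests, with a 60 s minimum and 3600 s maximum:
print(should_shut_down(30, 100, 100, 60, 3600))   # False: minimum not yet reached
print(should_shut_down(120, 100, 100, 60, 3600))  # True: request budget used up
print(should_shut_down(120, 5, 100, 60, 3600))    # False: keep serving
print(should_shut_down(4000, 0, 100, 60, 3600))   # True: maximum timeout exceeded
```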
Testing the Service using Python
Testing a service involves sending a request to the exposed endpoint. A common and robust approach is to use a Python script leveraging the popular requests library.
Since AIMMS services, particularly on the Cloud, often involve asynchronous processing (meaning the solution takes time), the client logic typically follows three steps:
Submit the Task (POST): Send the input data to the service endpoint. The server responds immediately with a unique Task ID.
Poll the Status (GET): Repeatedly check the status of the Task ID until the state is “completed” or “failed.”
Obtain the Response (GET): Once completed, retrieve the final results using the Task ID.
Python Client Code Flow
A conceptual Python client script using the requests library looks like this:
import requests
import json
import time

# --- Configuration ---
BASE_URL = 'https://[your-account].aimms.cloud/pro-api/v1'
API_KEY = 'YOUR_SECRET_API_KEY'
APP_NAME = 'VesselSchedulingApp'
SERVICE_NAME = 'solveVesselSchedulingExcel'

headers = {'Authorization': f'Bearer {API_KEY}'}  # Or {'apiKey': API_KEY}, depending on the PRO version

# 1. Submit the task (POST)
url_submit = f'{BASE_URL}/tasks/{APP_NAME}/latest/{SERVICE_NAME}'

# Assuming 'input_data.json' is the file containing the vessel/cargo data
with open('input_data.json', 'r') as f:
    request_body = json.load(f)

print("Submitting task...")
response_submit = requests.post(url_submit, json=request_body, headers=headers)
response_submit.raise_for_status()  # Check for HTTP errors (4xx or 5xx)
task_id = response_submit.json()['id']
print(f"Task submitted. ID: {task_id}")

# 2. Poll the status (GET)
url_poll = f'{BASE_URL}/tasks/{task_id}'
status = ""
while status not in ['completed', 'failed']:
    time.sleep(5)
    response_poll = requests.get(url_poll, headers=headers)
    response_poll.raise_for_status()
    status = response_poll.json()['state']
    print(f"Current status: {status}")

# 3. Obtain the response (GET)
if status == 'completed':
    url_response = f'{BASE_URL}/tasks/{task_id}/response'
    response_final = requests.get(url_response, headers=headers)
    response_final.raise_for_status()

    # The response body is the data written by AIMMS's dex::WriteToFile
    solution_data = response_final.json()
    print("\n--- Solution Retrieved ---")
    # Process and display the results (e.g., solution_data['OptimalRoutes'])
else:
    print("Task failed. Check the AIMMS Cloud logs for details.")
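The fixed five-second sleep in the script above can be replaced by a polling helper with exponential backoff and an overall deadline. The sketch below takes the status-fetching function as a parameter, so it can be exercised without a live server; the function name and defaults are illustrative, not part of any AIMMS API.

```python
import time

def poll_until_done(fetch_status, max_wait_s: float = 600.0,
                    initial_delay_s: float = 1.0,
                    max_delay_s: float = 30.0) -> str:
    """Call fetch_status() until it returns 'completed' or 'failed',
    sleeping with exponential backoff between calls; raise TimeoutError
    if the task does not finish within max_wait_s seconds."""
    deadline = time.monotonic() + max_wait_s
    delay = initial_delay_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(delay)
        delay = min(delay * 2, max_delay_s)  # back off, capped at max_delay_s
    raise TimeoutError("task did not finish within the allotted time")

# Simulated server that reports 'queued', then 'running', then 'completed':
states = iter(["queued", "running", "completed"])
print(poll_until_done(lambda: next(states), initial_delay_s=0.01))  # completed
```

In the real client, `fetch_status` would wrap the GET request on `url_poll` and return the `state` field of the JSON response.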
Deployment and Testing on AIMMS Cloud
The AIMMS Cloud offers a fully managed PRO environment, simplifying the entire deployment lifecycle.
Deployment
Once your AIMMS project is complete, you deploy it using the AIMMS Developer environment by creating an end-user package (an .aimmspack file) and uploading it to the Cloud Portal.
Export: use the AIMMS Developer menu to create the .aimmspack file.
Publish: log into the AIMMS Cloud; in the Apps section, publish the new application version, providing the .aimmspack file.
Service Activation: because the procedure pr_solveVesselSchedulingExcel carries the dex::ServiceName attribute, the REST service is automatically exposed upon successful publication. The Cloud handles all necessary infrastructure setup, including load balancing and routing.
Testing on the Cloud
Testing is achieved by calling the endpoint using the AIMMS PRO REST API. The base URL structure for calling a published service endpoint is:
https://[Your-Account].aimms.cloud/pro-api/v1/tasks/[AppName]/[AppVersion]/[ServiceName]
By posting your input data to this unique URL (as demonstrated in the Python client example), the Cloud automatically:
Queues the task.
Launches a dedicated AIMMS session (a containerized environment, or “pod”).
Executes the associated procedure (e.g., pr_solveVesselSchedulingExcel).
Manages the session lifecycle, ensuring the session stops after the task is complete and making the final output available via the task ID.
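As a small sanity check, the base URL structure shown above can be captured in a helper function; the account and app names used in the example call are placeholders.

```python
def task_url(account: str, app: str, version: str, service: str) -> str:
    """Build the AIMMS PRO task-submission URL from its four components."""
    return (f"https://{account}.aimms.cloud/pro-api/v1"
            f"/tasks/{app}/{version}/{service}")

print(task_url("acme", "VesselSchedulingApp", "latest",
               "solveVesselSchedulingExcel"))
# https://acme.aimms.cloud/pro-api/v1/tasks/VesselSchedulingApp/latest/solveVesselSchedulingExcel
```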
This automated management eliminates the need for manual service start/stop commands or complex resource configuration, making the Cloud the preferred environment for production-scale service execution.