SAP recently released a new version of CPI with some new features, including on-premise JDBC connectivity. This is a long-awaited feature that will be useful in many scenarios. This blog post covers the steps you need to take to configure and test connectivity in an end-to-end scenario.
Note: Some steps are skipped because the documentation and various blog posts already describe them in sufficient detail; all relevant links to blog posts and documentation are provided.
- Cloud Connector configuration
- Configuration of the data source
- iFlow configuration
- E2E test scenario
This section covers the configuration of the internal system mapping, which allows incoming external (CPI) calls to be redirected to the correct internal host/port.
The initial configuration of the Cloud Connector is covered in this post: How to configure the connection to the cloud.
Once everything is set up on the local server, it's time to add the mapping that connects the cloud to the local system. As you can see in the screenshot below, I added a new TCP connection.
Local cloud access control table
The new incoming TCP¹ connection contains this data:

| Field | Value |
| --- | --- |
| Back-end type | Non-SAP System |
| Virtual host | <virtual host>² (e.g. mssqlserver) |
| Virtual port | <virtual host port> (usually 1433) |
| Internal host | <the URL of your internal host system> (e.g. your-server.com) |
| Internal port | <your internal host port> (usually 1433) |
| Description | <up to you> |
| Check Internal Host | Enabled/Disabled (you can always check it manually later) |
¹TCP is the protocol used for standard JDBC connections.
²You can add any virtual host address, as this is the one you will be calling from CPI. In this scenario, for simplicity, we replicated the same name as the internal host, but it's recommended to use a different naming convention so as not to reveal back-end details.
Mapping of virtual to internal system data
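As a side note, what the optional "Check Internal Host" flag verifies can also be reproduced by hand: a plain TCP connection attempt against the internal host and port. A minimal sketch, assuming the placeholder host/port values from the table above:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Quick reachability check for the internal SQL Server host/port,
// roughly what "Check Internal Host" does in the Cloud Connector.
// Host and port below are placeholders -- replace with your own values.
public class CheckInternalHost {
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true; // TCP handshake succeeded
        } catch (Exception e) {
            return false; // refused, timed out, or unresolvable
        }
    }

    public static void main(String[] args) {
        System.out.println(isReachable("your-server.com", 1433, 2000)
                ? "Port 1433 reachable" : "Port 1433 NOT reachable");
    }
}
```

Run this from the machine hosting the Cloud Connector, since that is where the outbound connection to the database actually originates.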
Now that the Cloud Connector is configured and connected to the CPI tenant, it's time to add the MS SQL Server data source in CPI.
You just need to follow a few simple steps to set up your JDBC data source:
2.1. Download the SQL Server driver
2.2. Upload/deploy the driver in CPI JDBC Material
2.3. Create the JDBC data source
These steps are explained in more detail in the SAP documentation section JDBC Material Management.
2.1. Download the SQL Server driver
Follow this link and extract the ZIP file; you should see something similar to the screenshot below:
MS SQL Server driver folder
2.2. Upload/deploy the driver in CPI JDBC Material
Now go to the CPI tenant and navigate to Operations View > JDBC Material > JDBC Driver.
On this screen you can upload new drivers if you have the right roles. Click Add and a pop-up window will appear. Select Microsoft SQL Server as the database type, select the JAR file and click Deploy.
Select database type
Note: I chose mssql-jdbc-8.4.1.jre8.jar
After the driver has been deployed, a new row will appear in the JDBC Driver table, as shown in the screenshot below.
JDBC driver implemented successfully
It will take a few seconds for the status to change to 'Started', but once that has happened you don't need to do anything else in this menu.
2.3. Create the JDBC data source
After the driver has been deployed, the next step is to create the JDBC data source that the iFlow will use. Switch to the JDBC Data Source tab and click the Add button. A new form will appear that needs to be filled out.
The JDBC data source configuration contains these fields:

| Field | Value |
| --- | --- |
| Name | <the name of the data source> (e.g. TEST_DATA_SOURCE) |
| Database type | Microsoft SQL Server |
| User | <SQL Server database username> |
| Password | <password of the selected user> |
| Repeat password | <as above> |
| JDBC URL¹ | jdbc:sqlserver://<virtual host>:<virtual port>;DatabaseName=<schema name>; |
| Location ID | <only if you have configured more than one Cloud Connector> |
| Cloud Connector | Enabled (must be enabled to go through the Cloud Connector) |
¹Note that the virtual host and virtual port must match what you previously configured in the Cloud Connector
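To make the URL format explicit, here is a small sketch that assembles it from the same pieces as the table above (the host, port and schema values are placeholders). Note that the trailing `;DatabaseName=...;` part is easy to forget and shows up later in the troubleshooting section:

```java
// Sketch: assembling the JDBC URL exactly as the data source expects it.
// The virtual host/port must match the Cloud Connector mapping, and the
// ";DatabaseName=...;" suffix is required.
public class JdbcUrlBuilder {
    static String buildUrl(String virtualHost, int virtualPort, String schema) {
        return "jdbc:sqlserver://" + virtualHost + ":" + virtualPort
                + ";DatabaseName=" + schema + ";";
    }

    public static void main(String[] args) {
        // Example with the placeholder values used in this scenario.
        System.out.println(buildUrl("mssqlserver", 1433, "TestDB"));
        // -> jdbc:sqlserver://mssqlserver:1433;DatabaseName=TestDB;
    }
}
```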
The result will look something like this:
Add new data source
After filling out all the required fields, click Deploy and the configuration will be deployed and the data source will be created. The screenshot below shows what it should look like after deployment. The data source shows as "Saved" after the initial deployment, but after a few seconds it should show as "Deployed".
Data source added successfully
Everything is now set up to consume database data from a CPI iFlow over the new connection. This section does not cover how to create the package/iFlow, so those steps are skipped.
Once the iFlow is created, connect the sender to the start event with an HTTPS connection and provide the URL endpoint as desired. Add a new Request-Reply step and connect it to the SQL Server receiver.
This connection allows us to invoke the process whenever we want. Just don't forget to add the required roles to your user (or the user you use to invoke the process; we'll need them later).
This connection passes whatever is in the CPI message body and attempts to execute it. Because we exposed the process over HTTP, the result of each call is returned to the caller.
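The call described above can also be sketched in code rather than Postman: an HTTP POST whose raw body is the SQL statement, with basic authentication for the CPI user. The endpoint URL, credentials and table name below are all placeholders, not values from this scenario:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of the iFlow invocation: POST an SQL statement as the raw body
// to the HTTPS endpoint. All concrete values below are placeholders.
public class InvokeIflow {
    static HttpRequest buildRequest(String endpoint, String user, String pass, String sql) {
        String auth = Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));
        return HttpRequest.newBuilder(URI.create(endpoint))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "text/plain")
                .POST(HttpRequest.BodyPublishers.ofString(sql))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildRequest(
                "https://my-tenant.cfapps.eu10.hana.ondemand.com/http/jdbc-test", // placeholder
                "user", "password",
                "SELECT * FROM dbo.TestTable"); // placeholder table
        // To actually invoke the iFlow (assuming the endpoint is reachable):
        // HttpResponse<String> resp = HttpClient.newHttpClient()
        //         .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(req.method() + " " + req.uri());
    }
}
```

The user performing the call needs the roles mentioned earlier, exactly as with Postman.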
The entire iFlow looks like the screenshot below:
Once everything is set up in the iFlow, save and deploy the process. If all goes well, you should have an endpoint that represents your iFlow.
If you want to get the endpoint of the integration process, go to Operations View > Manage Integration Content > All. Select your iFlow and copy the endpoint shown there.
Classic option to get the endpoint
There is another option to get the URL if you use the ConVista CPI Helper tool. Just click Get Info while in the iFlow and you'll see the URL there.
Note: This process follows the Request-Response pattern; if you want to learn more about patterns, you can follow my colleague Bhalchandra Wadekar's series of blog posts on Enterprise Integration Patterns (EIPinCPI).
Once the iFlow is deployed, you can call the endpoint with Postman or another REST client tool (in this case I'm using Postman).
Insert postman call
As you can see in the message above, you can send any SQL statement and it will be executed against the database. After the insert is triggered, a subsequent select statement also returns the new values.
I will add more issues (and how to solve them) as I find them. These are the main ones I've seen so far:
- If you have this problem on iFlow:
com.sap.it.rt.adapter.http.api.exception.HttpResponseException: An internal server error has occurred: TCP/IP connection to host localhost, port 1105 failed. Error: "The driver received an unexpected response before logon. Please check the connection properties and make sure there is an instance of SQL Server running on the host and accepting TCP/IP connections on the port. This driver can only be used with SQL Server 2005 or later.".
-> You have to change the protocol from TCP SSL to TCP in the Cloud Connector mapping.
- If you have this problem on iFlow:
com.sap.it.rt.adapter.http.api.exception.HttpResponseException: An internal server error has occurred: Error getting data source service: ServiceReference is null.
-> You have an incorrect JDBC URL in the JDBC data source configuration, and it is probably not deployed correctly. You haven't set the port in the JDBC URL of the data source.
- If you have this problem on iFlow:
com.sap.it.rt.adapter.http.api.exception.HttpResponseException: An internal server error has occurred: port number 1108/CPI is invalid.
-> You have an incorrect JDBC URL in the JDBC data source configuration, and it is probably not deployed correctly. You haven't added `;DatabaseName=<database schema>;` (or added it incorrectly) in the JDBC URL.
- If you get no answer from the database:
-> See this answer by David Ruiz de Azua.
Disclaimer: The settings described in this post are for experimental purposes only and some of the settings may require additional security verification.
This is my first post and it might contain some errors, any feedback is welcome.