Channel: A Portal to a Portal

IBM Integration Bus and Cloudant - Baby steps ...

I'm starting to explore the possibilities of integration between IBM Integration Bus ( and thus IBM AppConnect Enterprise ) and IBM Cloudant, both of which are running on the IBM Cloud Platform ( nee Bluemix ).

Having spun up an instance of Cloudant, and worked out how to "feed" it using RESTful APIs, via cURL, I now wanted to find out how one can achieve integration between IIB and Cloudant.

This was of immense help: -

IBM Integration Bus v10 tutorials on Github


specifically this tutorial: -

Using a LoopBack Request node to insert data into a Cloudant database

and this: -

Using some of the more advanced features of the LoopBackRequest node


Tutorial: Installing LoopBack connectors


IIB V10.0.0.6 Loopback Request node using MongoDB and Cloudant tutorial


One of the key requirements is to install a Loopback Connector for IBM Cloudant, as per this exception: -

BIP2087E: Integration node 'TESTNODE_Dave' was unable to process the internal configuration message. 

The entire internal configuration message failed to be processed successfully. 

Use the messages following this message to determine the reasons for the failure. If the problem cannot be resolved after reviewing these messages, contact your IBM Support center. Enabling service trace may help determine the cause of the failure.
BIP4041E: Integration server 'default' received an administration request that encountered an exception. 

While attempting to process an administration request, an exception was encountered. No updates have been made to the configuration of the integration server. 

Review related error messages to determine why the administration request failed.
BIP3879E: The LoopBackRequest node received an error from LoopBack when attempting to connect to the data source name 'CLOUDANT'. Detail: ' WARNING: LoopBack connector "cloudant" is not installed as any of the following modules:   ./connectors/cloudant loopback-connector-cloudant  To fix, run:      npm install loopback-connector-cloudant --save '. 

An error was received when establishing the connection to the configured LoopBack connector data source. 

Check the error information to determine why the error occurred and take the appropriate action to resolve the error. The error detail is a LoopBack connector error message.
BIP2871I: The request made by user 'Dave-PC\Dave' to 'change' the resource '/LoopBack/Loopback_Cloudant' of type 'MessageFlow' on parent 'default' of type 'ExecutionGroup' has the status of 'FAILED'.

Having downloaded/installed NodeJS ( node-v8.11.2-x64.exe ) on the Windows VM, I tried/failed to work out how to resolve the missing dependency.

This gave me the clue: -




Preparing the Integration Node runtime environment to connect to Cloudant


Namely, I needed to navigate to the appropriate directory under the Integration Node's working directory: -

cd C:\ProgramData\IBM\MQSI\node_modules

notepad C:\ProgramData\IBM\MQSI\package.json

{
  "name": "system32",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "dependencies": {
    "loopback-connector-cloudant": "^2.0.5"
  },
  "devDependencies": {},
  "scripts": {
    "test": "echo \"Error: no test specified\"&& exit 1"
  },
  "author": "",
  "license": "ISC"
}

npm install loopback-connector-cloudant --save

and, for good belt-and-braces reasons, restart the Integration Node.

Once I did this, I was able to run the flow using the Flow Exerciser: -


and insert data into Cloudant using the Create operation: -


or retrieve data from Cloudant using the Retrieve operation: -


which is nice :-)

As a final consideration, I needed to create a datasources.json configuration file here: -

C:\ProgramData\IBM\MQSI\connectors\loopback

{
  "CLOUDANT": {
    "name": "bluey",
    "connector": "cloudant",
    "username": "2ha2294a-fd6f-42a5-a220-a4221ef51df0-bluemix",
    "password": "5322e20e538422a92f2eaca69db094883125cdfa4db28c20dd82bc3662161108",
    "url": "https://2ha2294a-fd6f-42a5-a220-a4221ef51df0-bluemix:5322e20e538422a92f2eaca69db094883125cdfa4db28c20dd82bc3662161108@2ha2294a-fd6f-42a5-a220-a4221ef51df0-bluemix.cloudant.com",
    "database": "bluey"
  }
}




Cloudant - Continuing to tinker

I'm importing data from a Comma Separated Value (CSV) file into Cloudant, using the most excellent CouchDB tools provided by my IBM colleague, Glynn Bird.

Having created a CSV: -

vi cartoon.csv 

id,givenName,familyName
1,Maggie,Simpson
2,Lisa,Simpson
3,Bart,Simpson
4,Homer,Simpson
5,Fred,Flintstone
6,Wilma,Flintstone
7,Barney,Rubble
8,Betty,Rubble


( with due respect to the creators and owners of The Simpsons and The Flintstones )

I setup my environment: -

export ACCOUNT=0e5c777542c5e2cc2418013429e0824f-bluemix:d088ff753c9e258add92e45128cd161dacbffedbcec0c8f78b216368ba0503ab


export HOST=d088ff753c9e258add92e45128cd161d-bluemix.cloudant.com

export COUCH_URL=https://$ACCOUNT@$HOST

export COUCH_DATABASE="CARTOON"

export COUCH_DATABASE=`echo $COUCH_DATABASE | tr '[:upper:]' '[:lower:]'`

export COUCH_DELIMITER=","
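A quick aside on that tr step: the two character-class arguments must be separated by a space; if they are run together, the shell passes tr a single operand and the command fails rather than lower-casing anything. A standalone check ( DB_NAME is just an illustrative variable ): -

```shell
# lower-case a database name; note the space between the two tr arguments
DB_NAME="CARTOON"
DB_NAME=$(echo "$DB_NAME" | tr '[:upper:]' '[:lower:]')
echo "$DB_NAME"   # cartoon
```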

and created my database: -

curl -X PUT $COUCH_URL/$COUCH_DATABASE

and populated it: -

cat $COUCH_DATABASE.csv | couchimport

This worked well but … my data had a system-generated _id field, whereas I wanted to use my own id field: -

{
  "_id": "e143bcd25bc620e6aa8f2adc206cf21c",
  "_rev": "1-0152a3e6867ad34da6e882a80f0fbeff",
  "id": "1",
  "givenName": "Maggie",
  "familyName": "Simpson"
}

{
  "_id": "82c1068c830759a904cfdd02ab41b980",
  "_rev": "1-6bbb94301323a3c3f6ff54f1c3c765e5",
  "id": "2",
  "givenName": "Lisa",
  "familyName": "Simpson"
}

Thankfully Glynn kindly advised me how to use a JavaScript function to mitigate this: -

vi ~/transform_cartoon.js

var transform = function(doc) {
  doc._id = doc.id;  // use the id from the CSV as the document's _id
  delete doc.id;     // drop the now-redundant original id field
  return doc;
};

module.exports = transform;

which effectively assigns the _id field to the value of the id field ( as taken from the CSV ) and also drops the original id field.

I dropped the DB: -

curl -X DELETE $COUCH_URL/$COUCH_DATABASE

and recreated it: -

curl -X PUT $COUCH_URL/$COUCH_DATABASE

and then repopulated it: -

cat $COUCH_DATABASE.csv | couchimport --transform ~/transform_cartoon.js

and now we have this: -

{
  "_id": "1",
  "_rev": "1-0e77dbadefba2a95e5cde5bda2ecd695",
  "givenName": "Maggie",
  "familyName": "Simpson"

}

{
  "_id": "2",
  "_rev": "1-fc746edc394ac98b013b7788cc1cca5d",
  "givenName": "Lisa",
  "familyName": "Simpson"
}

If needed, I could modify my transform: -

var transform = function(doc) {
  doc._id = doc.id;  // use the id from the CSV as the document's _id
  return doc;        // keep the original id field too
};

module.exports = transform;

to avoid dropping the original id field, to give me this: -

{
  "_id": "1",
  "_rev": "1-0152a3e6867ad34da6e882a80f0fbeff",
  "id": "1",
  "givenName": "Maggie",
  "familyName": "Simpson"
}

{
  "_id": "2",
  "_rev": "1-6bbb94301323a3c3f6ff54f1c3c765e5",
  "id": "2",
  "givenName": "Lisa",
  "familyName": "Simpson"
}

so I have choices :-) 

For more insights, please go here: -



macOS - Windows are off the screen ...

I had an issue earlier where my chosen Twitter client, Tweetbot, somehow wandered off the screen, lurking off to the right.

Whilst I could see Tweetbot when I hit the F4 key to open Mission Control, I couldn't actually get to it :-( 

Thankfully the internet ( what, all of it ? ) had the answer: -


OS X, particularly recent versions of the operating system, do a good job of corralling application windows by either not allowing a user to resize a window beyond the boundaries of the screen or by automatically snapping a window to a second display for those with multi-monitor setups. But sometimes — due to errors, bugs, or when disconnecting an external monitor — an application window can get "stuck" partially or completely outside of the visible area of the Mac's display, and getting it back can seem impossible. Thankfully, there's a quick and easy step you can take to automatically fix an off screen window in Mac OS X, and it's called Zoom.


If you can see the green zoom button, it's the best way to bring the missing portions of your OS X application window back into view. But what if it's the top of the window that's off screen, and you can't see the zoom button at all? In that case, you can achieve the same result via an option in the menu bar.

Simply select your desired application to make it active by clicking on its icon in the Dock (you should see the application's name in the top-left corner of your OS X Menu Bar, next to the Apple logo). Then, also in the Menu Bar, click the word Window and then Zoom. If you have multiple windows open in the same application, you can also select Zoom All to bring them all to the correct position at once.




Bottom line, I hit the Tweetbot icon in the Dock


and then chose Zoom from the Window menu


Now the app is back where it should be


Nice :-)

Munging data - removing duplicates from CSV files

Whilst fiddling with Cloudant yesterday: -


I hit an issue whereby I was trying / failing to upload data that contained duplicates: -

index,name
1,Dave
2,Bob
3,Barney
4,Homer
5,Bart
1,Dave
3,Barney


Note that this is an example file; the real data had 1,000s of rows :-(

cat duplo.csv | couchimport --transform transform_duplo.js 

  couchimport ****************** +0ms
  couchimport configuration +2ms
  couchimport {
  couchimport  "COUCH_URL": "https://****:****@7fb7794a-fd6f-47a5-a770-a3521ef51df0-bluemix.cloudant.com",
  couchimport  "COUCH_DATABASE": "duplo",
  couchimport  "COUCH_DELIMITER": ",",
  couchimport  "COUCH_FILETYPE": "text",
  couchimport  "COUCH_BUFFER_SIZE": 500,
  couchimport  "COUCH_JSON_PATH": null,
  couchimport  "COUCH_META": null,
  couchimport  "COUCH_PARALLELISM": 1,
  couchimport  "COUCH_PREVIEW": false,
  couchimport  "COUCH_IGNORE_FIELDS": []
  couchimport } +2ms
  couchimport ****************** +0ms
  couchimport { id: '1',
  couchimport   error: 'conflict',
  couchimport   reason: 'Document update conflict.' } +561ms
  couchimport { id: '3',
  couchimport   error: 'conflict',
  couchimport   reason: 'Document update conflict.' } +2ms

  couchimport Written ok:5 - failed: 2 -  (5) +0ms
  couchimport { documents: 5, failed: 2, total: 5, totalfailed: 2 } +0ms
  couchimport writecomplete { total: 5, totalfailed: 2 } +81ms
  couchimport Import complete +0ms


I wanted to strip out the duplicates ( of which there were MANY )

Thankfully the internet showed me how: -



So I did this: -

awk -F, '!seen[$1]++' duplo.csv > duplo_DEDUP.csv

which gave me this: -

index,name
1,Dave
2,Bob
3,Barney
4,Homer
5,Bart


cat duplo_DEDUP.csv | couchimport --transform transform_duplo.js 

  couchimport ****************** +0ms
  couchimport configuration +2ms
  couchimport {
  couchimport  "COUCH_URL": "https://****:****@7fb7794a-fd6f-47a5-a770-a3521ef51df0-bluemix.cloudant.com",
  couchimport  "COUCH_DATABASE": "duplo",
  couchimport  "COUCH_DELIMITER": ",",
  couchimport  "COUCH_FILETYPE": "text",
  couchimport  "COUCH_BUFFER_SIZE": 500,
  couchimport  "COUCH_JSON_PATH": null,
  couchimport  "COUCH_META": null,
  couchimport  "COUCH_PARALLELISM": 1,
  couchimport  "COUCH_PREVIEW": false,
  couchimport  "COUCH_IGNORE_FIELDS": []
  couchimport } +1ms
  couchimport ****************** +1ms
  couchimport Written ok:5 - failed: 0 -  (5) +489ms
  couchimport { documents: 5, failed: 0, total: 5, totalfailed: 0 } +1ms
  couchimport writecomplete { total: 5, totalfailed: 0 } +40ms
  couchimport Import complete +1ms

which is nice :-)

Yay for awk !
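For the record, !seen[$1]++ keeps the first row seen for each value of column 1 and drops the rest. One caveat: because it keys on the first field only, two rows that share an index but differ elsewhere are also collapsed. A standalone illustration, using a made-up duplo_test.csv: -

```shell
# a made-up sample with an exact duplicate (1,Dave) and a same-key clash (1,Davey)
printf 'index,name\n1,Dave\n2,Bob\n1,Dave\n1,Davey\n' > duplo_test.csv

# keep only the first row seen for each value of column 1
awk -F, '!seen[$1]++' duplo_test.csv
# index,name
# 1,Dave
# 2,Bob
```

Note that 1,Davey is discarded too, as its key ( 1 ) has already been seen - so key on $0 instead if only exact duplicate lines should go.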


Cloudant - Backups

Having invested a fair bit of time populating databases into Cloudant: -


I wanted to backup my databases, for safety's sake.

My IBM colleague, Glynn Bird, came to the rescue again: -


with his couchbackup tool.

It was a simple matter of installing the package: -

sudo npm install -g couchbackup

and then running it: -

export COUCH_DB=acctdb
couchbackup --db $COUCH_DB > $COUCH_DB.txt

I then got really clever, and used some scripting to extract the names of ALL my databases: -

curl -X GET -g $COUCH_URL/_all_dbs

which returned a JSON object comprising the names of my DBs: -

["account","cartoon","foobar","snafu"]

so I "merely" needed to remove the square brackets, the double-quotes and the commas.

Long story short, this is what I ended up with: -

declare -a arr=(`curl -s -X GET -g $COUCH_URL/_all_dbs | sed 's/[][]//g' | sed 's/,/ /g' | sed 's/"//g'`)
for i in "${arr[@]}"; do export COUCH_DB=$i; couchbackup --db $COUCH_DB > $COUCH_DB.txt; done

This pulls back the list of databases from Cloudant, removes the unwanted characters, and then runs the couchbackup command to extract each of them to a text file ….
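The clean-up part of that pipeline can be tested in isolation, against a canned _all_dbs response, without touching Cloudant at all: -

```shell
# strip the square brackets, double-quotes and commas from a sample response
dbs=$(echo '["account","cartoon","foobar","snafu"]' | sed 's/[][]//g' | sed 's/,/ /g' | sed 's/"//g')
echo "$dbs"   # account cartoon foobar snafu
```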

Nice :-) 


IBM Operational Decision Manager 8.9 - Tuning Guide

This was recently published to the ODMDev space on IBM developerWorks: -


IBM ODM performance architects, Pierre-Andre Paumelle (paumelle@fr.ibm.com) and Nicolas Peulvast (peulvast@fr.ibm.com), have just released a new version of the IBM ODM Tuning Guide. The guide takes you on a tour of the key areas of IBM ODM and shows you how to get under the hood and improve the performance of ODM versions 8.9.0, 8.9.1, and 8.9.2.

The guide covers various "basic" and "advanced" topics, for example, you can learn how best to set up tracing in the Rule Execution Server, and you can find information on how to configure Solr on Decision Center. There is also help for you on migrating a Decision Server database, and how to optimize Decision Warehouse.

So roll up your sleeves, and get tuning. With this guide and a bit of elbow grease, you'll have ODM purring like a kitten in no time!

IBM HTTP Server (IHS) Performance and Tuning and Some Docs

I was having a useful chat with a colleague about threading and concurrency in IBM HTTP Server, which is based upon Apache.

I provided him with a few pertinent URLs: -

For IHS 8.5.X ( based upon Apache 2.2 )




For IHS 9.X ( based upon Apache 2.4 )




and these: -




Using IBM App Connect enterprise capabilities with IBM MQ on Cloud

A requirement similar to this: -

Your company wants to expose a custom JSON-based REST API to their developers for sending messages to a stock control application through an MQ queue that is hosted in MQ on IBM Cloud. The format of the messages that the application expects is a custom COBOL copy book structure. An integration solution needs to be constructed that receives a simple JSON REST request, converts the JSON to the COBOL structure and then sends that to the correct queue. The integration solution is imported to run in IBM App Connect on IBM Cloud (with a plan that provides enterprise capabilities).

has come up recently, on a project upon which my team are engaged.

Therefore, this is rather useful: -


Worth a read …...

Cloudant - Fun with Indexing and Querying

So I was trying to resolve an issue for a colleague, who needed to use an $or operator.

He found that his query would take a very long time ( minutes ) and fail to return any results, searching through ~500K documents.

I tested the problem and, eventually, the solution, using my own data set: -

id,givenName,familyName
1,Maggie,Simpson
2,Lisa,Simpson
3,Bart,Simpson
4,Homer,Simpson
5,Fred,Flintstone
6,Wilma,Flintstone
7,Barney,Rubble
8,Betty,Rubble


In Cloudant, each document looks like this: -

{
  "_id": "1",
  "_rev": "1-0152a3e6867ad34da6e882a80f0fbeff",
  "id": "1",
  "givenName": "Maggie",
  "familyName": "Simpson"
}

{
  "_id": "2",
  "_rev": "1-6bbb94301323a3c3f6ff54f1c3c765e5",
  "id": "2",
  "givenName": "Lisa",
  "familyName": "Simpson"
}

etc.

So this was the query I was using: -

{
  "selector": {
     "familyName": "Simpson",
     "givenName": {
        "$or": [
           {
              "givenName": "Maggie"
           },
           {
              "givenName": "Lisa"
           }
        ]
     }
  },
  "fields": [
     "givenName",
     "familyName"
  ],
  "sort": [
     {
        "givenName": "asc"
     }
  ]
}

In my simple brain, this would return documents for Maggie and Lisa, out of the eight in my database.

I'd previously created this index: -

{
   "index": {
      "fields": [
         "givenName"
      ]
   },
   "name": "givenName-json-index",
   "type": "json"
}

When I ran my query, I got nothing back: -


apart from this statistic: 


Thankfully I found a smart person on our Cloudant Slack channel, who told me where I was going wrong: -


So I changed my query: -

{
   "selector": {
      "$or": [
         {
            "givenName": "Maggie"
         },
         {
            "givenName": "Lisa"
         }
      ]
   },
   "fields": [
      "givenName",
      "familyName"
   ],
   "sort": [
      {
         "givenName": "asc"
      }
   ]
}

and now I see data: -


Yay!
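For completeness, the familyName constraint from my first attempt can still be combined with the $or: Cloudant Query treats multiple top-level conditions in a selector as an implicit AND, so a selector along these lines ( a sketch, not one I ran as part of the above ) should return just the two Simpson girls: -

```json
{
   "selector": {
      "familyName": "Simpson",
      "$or": [
         {
            "givenName": "Maggie"
         },
         {
            "givenName": "Lisa"
         }
      ]
   },
   "fields": [
      "givenName",
      "familyName"
   ]
}
```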



This was also very useful: -


Note to self - setting AND keeping an alternate boot device on macOS

Having added a shiny new 256 GB SSD drive to my Mac Mini ( this is a USB3 device, as I didn't fancy opening up the Mini and replacing the in-built Fusion drive ), I needed a way to make the new drive bootable.

I'd already used SuperDuper to clone the old drive to the new drive.

I just needed to work out how to (a) boot from it and (b) make the new drive the main drive.

This gave me the answers: -


Specifically this: -


Note this subtlety: -


Yes, it's all very well booting from the SSD, but no good if it then reverts back to the "spinning rust" that is the Fusion drive ( yes, I know it's a mix of disk and SSD ).

However, the other issue that I faced was that my Bluetooth keyboard ( connected via an external USB Bluetooth dongle ) did NOT allow me to press [Option] during the boot process.

This came to the rescue: -


… however, if you use Apple's Bluetooth keyboard, you could find that the system may ignore these inputs and boot normally. While you might assume that these options require a USB keyboard or other physical connection …

Thankfully I had a wired USB keyboard, so I used that ….

The article does offer some other guidance: -

If any inputs are being sent via the Bluetooth keyboard before the controllers are active, then they will not be recognized by the system. However, if these inputs are performed after the controllers are activated, then they will be properly read. Therefore, for Bluetooth keyboards, be sure to press the desired key sequences after you hear the boot chimes and not before.

which is nice.

So I'm now booting from USB/SSD and the 2014 Mac Mini is suddenly WAY faster !

Doofus Alert - Using Cloudant queries via cURL

I'm continuing to tinker with Cloudant, and am looking at how I can use indexes ( indices ? ) via the command-line using cURL.

This is what I'm sending: -

curl -X POST -H 'Content-type: application/json' -g $COUCH_URL/$COUCH_DATABASE/_find -d query.json

and this is what I'm seeing: -

{"error":"bad_request","reason":"invalid UTF-8 JSON"}

I checked my query: -

cat query.json 

{
   "selector": {
      "$or": [
         {
            "givenName": "Maggie"
         },
         {
            "givenName": "Lisa"
         }
      ]
   },
   "fields": [
      "givenName",
      "familyName"
   ],
   "sort": [
      {
         "givenName": "asc"
      }
   ]
}

and it looks OK.

Thankfully, this chap had already hit the same: -



Yep, that's exactly where I went wrong ….

I changed my query: -

curl -X POST -H 'Content-type: application/json' -g $COUCH_URL/$COUCH_DATABASE/_find -d @query.json

and … guess what ….

IT WORKED

{"docs":[
{"givenName":"Lisa","familyName":"Simpson"},
{"givenName":"Maggie","familyName":"Simpson"}
],
"bookmark": "g1AAAAA9eJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYozGoIkOGASOSAhkDibb2J6emZqVhYA5ooQDg"}

Can you say "Doofus" ? I bet you can ….
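For the record, the difference is easy to see without a Cloudant instance at all; with -d, cURL sends the literal string that follows as the request body, and only the @ prefix makes it read the body from a file: -

```shell
# create a throw-away payload file ( a made-up example, not my real query )
printf '{"selector":{}}' > query.json

# with `-d query.json`, the request body is the literal 10-character
# string "query.json" - hence Cloudant's "invalid UTF-8 JSON" complaint
body_without_at='query.json'

# with `-d @query.json`, the request body is the file's contents
body_with_at=$(cat query.json)

echo "$body_without_at"   # query.json
echo "$body_with_at"      # {"selector":{}}
```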

IBM API Connect V2018.3.1 is available

This just in: -

IBM API Connect V2018.3.1 is now available. This update includes important internal development fixes and support for the API Designer as part of the toolkit.

Content

IBM API Connect 2018.x delivers enhanced capabilities for the market-leading IBM API management solution. In addition to the ability to deploy in complex, multi-cloud topologies, this version provides enhanced experiences for developers and cloud administrators at organizations.

The API Connect 2018.3.1 update includes important internal development fixes. In addition, this release includes the API Designer within the toolkit. API developers use the API management functions in the API Designer or the CLI to create draft API definitions for REST and SOAP APIs, or for OAuth provider endpoints that are used for OAuth 2.0 authentication. The API definitions can be configured to add the API to a Product, add a policy assembly flow (to manipulate requests/responses), and to define security options and other settings. APIs can then be tested locally prior to publishing, to ensure they are defined and implemented correctly.

Upgrading to 2018.3.1 makes changes to the underlying data structure of API Connect.  It is highly recommended to have automatic backups configured in your environment and at least one successful backup complete prior to performing this upgrade.

We advise all users of IBM API Connect 2018.1.x and earlier versions of IBM API Connect 2018.2.x to install this update to take advantage of the fixes.

IBM AppConnect and DB2 and SQL0530N

I'm fiddling about with IBM AppConnect Professional ( formerly known as CastIron ), looking at the integration between a flow running on AppConnect, hosted on the IBM Cloud ( nee Bluemix ) and a DB2 database running on a VM on my Mac.

I'll be writing another blog post about the actual integration, including the Secure Gateway later.

Meantime, I wanted to test my flow, which should be monitoring a table for changes.

I did this by inserting a new row into the EMPLOYEE table of the SAMPLE database ( which has been around since I worked on DB2/400 in the mid-90s ).

This is what that table looks like: -

db2 describe "select * from DB2INST1.EMPLOYEE"

 Column Information

 Number of columns: 14

 SQL type              Type length  Column name                     Name length
 --------------------  -----------  ------------------------------  -----------
 452   CHARACTER                 6  EMPNO                                     5
 448   VARCHAR                  12  FIRSTNME                                  8
 453   CHARACTER                 1  MIDINIT                                   7
 448   VARCHAR                  15  LASTNAME                                  8
 453   CHARACTER                 3  WORKDEPT                                  8
 453   CHARACTER                 4  PHONENO                                   7
 385   DATE                     10  HIREDATE                                  8
 453   CHARACTER                 8  JOB                                       3
 500   SMALLINT                  2  EDLEVEL                                   7
 453   CHARACTER                 1  SEX                                       3
 385   DATE                     10  BIRTHDATE                                 9
 485   DECIMAL                9, 2  SALARY                                    6
 485   DECIMAL                9, 2  BONUS                                     5
 485   DECIMAL                9, 2  COMM                                      4


This is what I ran: -

db2 "INSERT INTO DB2INST1.EMPLOYEE   VALUES('000001','Dave','M','Hay','ABC','2122','30/10/1999','Guru',18,'F','30/10/1973',1234.89,1234.89,1221.89)"

which returned: -

DB21034E  The command was processed as an SQL statement because it was not a 
valid Command Line Processor command.  During SQL processing it returned:
SQL0530N  The insert or update value of the FOREIGN KEY 
"DB2INST1.EMPLOYEE.RED" is not equal to any value of the parent key of the 
parent table.  SQLSTATE=23503

which baffled me somewhat.

I dug into the database further: -

db2look -d sample -e -t db2inst1.employee

CREATE TABLE "DB2INST1"."EMPLOYEE"  (
  "EMPNO" CHAR(6 OCTETS) NOT NULL , 
  "FIRSTNME" VARCHAR(12 OCTETS) NOT NULL , 
  "MIDINIT" CHAR(1 OCTETS) , 
  "LASTNAME" VARCHAR(15 OCTETS) NOT NULL , 
  "WORKDEPT" CHAR(3 OCTETS) , 
  "PHONENO" CHAR(4 OCTETS) , 
  "HIREDATE" DATE , 
  "JOB" CHAR(8 OCTETS) , 
  "EDLEVEL" SMALLINT NOT NULL , 
  "SEX" CHAR(1 OCTETS) , 
  "BIRTHDATE" DATE , 
  "SALARY" DECIMAL(9,2) , 
  "BONUS" DECIMAL(9,2) , 
  "COMM" DECIMAL(9,2) )   
 IN "USERSPACE1"  
 ORGANIZE BY ROW; 


-- DDL Statements for Primary Key on Table "DB2INST1"."EMPLOYEE"

ALTER TABLE "DB2INST1"."EMPLOYEE" 
ADD CONSTRAINT "PK_EMPLOYEE" PRIMARY KEY
("EMPNO");



-- DDL Statements for Indexes on Table "DB2INST1"."EMPLOYEE"

SET SYSIBM.NLS_STRING_UNITS = 'SYSTEM';

CREATE INDEX "DB2INST1"."XEMP2" ON "DB2INST1"."EMPLOYEE" 
("WORKDEPT" ASC)

COMPRESS NO 
INCLUDE NULL KEYS ALLOW REVERSE SCANS;
-- DDL Statements for Aliases based on Table "DB2INST1"."EMPLOYEE"

CREATE ALIAS "DB2INST1"."EMP" FOR TABLE "DB2INST1"."EMPLOYEE";


-- DDL Statements for Foreign Keys on Table "DB2INST1"."EMPLOYEE"

ALTER TABLE "DB2INST1"."EMPLOYEE" 
ADD CONSTRAINT "RED" FOREIGN KEY
("WORKDEPT")
REFERENCES "DB2INST1"."DEPARTMENT"
("DEPTNO")
ON DELETE SET NULL
ON UPDATE NO ACTION
ENFORCED
ENABLE QUERY OPTIMIZATION;
...

which showed me the error of my way.

In essence, the WORKDEPT column is actually keyed against a different table: -

db2 describe "select * from DB2INST1.DEPARTMENT"

 Column Information

 Number of columns: 5

 SQL type              Type length  Column name                     Name length
 --------------------  -----------  ------------------------------  -----------
 452   CHARACTER                 3  DEPTNO                                    6
 448   VARCHAR                  36  DEPTNAME                                  8
 453   CHARACTER                 6  MGRNO                                     5
 452   CHARACTER                 3  ADMRDEPT                                  8
 453   CHARACTER                16  LOCATION                                  8

db2 "select * from DB2INST1.DEPARTMENT"

DEPTNO DEPTNAME                             MGRNO  ADMRDEPT LOCATION        
------ ------------------------------------ ------ -------- ----------------
A00    SPIFFY COMPUTER SERVICE DIV.         000010 A00      -               
B01    PLANNING                             000020 A00      -               
C01    INFORMATION CENTER                   000030 A00      -               
D01    DEVELOPMENT CENTER                   -      A00      -               
D11    MANUFACTURING SYSTEMS                000060 D01      -               
D21    ADMINISTRATION SYSTEMS               000070 D01      -               
E01    SUPPORT SERVICES                     000050 A00      -               
E11    OPERATIONS                           000090 E01      -               
E21    SOFTWARE SUPPORT                     000100 E01      -               
F22    BRANCH OFFICE F2                     -      E01      -               
G22    BRANCH OFFICE G2                     -      E01      -               
H22    BRANCH OFFICE H2                     -      E01      -               
I22    BRANCH OFFICE I2                     -      E01      -               
J22    BRANCH OFFICE J2                     -      E01      -               

  14 record(s) selected.


My insert: -

db2 "INSERT INTO DB2INST1.EMPLOYEE   VALUES('000001','Dave','M','Hay','ABC','2122','30/10/1999','Guru',18,'F','30/10/1973',1234.89,1234.89,1221.89)"

is using a DIFFERENT and NON-EXISTENT code ( ABC ) for WORKDEPT.

I changed my insert to: -

db2 "INSERT INTO DB2INST1.EMPLOYEE   VALUES('000001','Dave','M','Hay','A00','2122','30/10/1999','Guru',18,'F','30/10/1973',1234.89,1234.89,1221.89)"

and it all worked: -

DB20000I  The SQL command completed successfully.

Yay !

IBM BPM and Oracle - Bootstrap challenges

So, whilst running the bootstrap command: -

/opt/ibm/WebSphereProfiles/Dmgr01/bin/bootstrapProcessServerData.sh -clusterName AppCluster

I saw this: -

java.lang.Exception: java.lang.reflect.InvocationTargetException
Caused by: java.lang.reflect.InvocationTargetException
Caused by: java.lang.IllegalStateException: Failed to initialize registry
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'handlersMap': Cannot create inner bean 'com.lombardisoftware.server.ejb.persistence.PSDefaultHandler#f4e9076' of type [com.lombardisoftware.server.ejb.persistence.PSDefaultHandler] while setting bean property 'sourceMap' with key [TypedStringValue: value [Branch], target type [null]]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.lombardisoftware.server.ejb.persistence.PSDefaultHandler#f4e9076' defined in class path resource [registry.persistence.xml]: Cannot resolve reference to bean 'dao.branch' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dao.branch' defined in class path resource [registry.persistence.xml]: Instantiation of bean failed; nested exception is java.lang.ExceptionInInitializerError

etc.

So I checked the log: -

view /opt/ibm/WebSphereProfiles/Dmgr01/logs/bootstrapProcesServerData.AppCluster.log

and saw this: -

[11/07/18 12:20:27:801 BST] 00000001 ProviderTrack I com.ibm.ffdc.osgi.ProviderTracker AddingService FFDC1007I: FFDC Provider Installed: com.ibm.ffdc.util.provider.FfdcOnDirProvider@cfa421b1
[11/07/18 12:20:28:470 BST] 00000001 LocalCache    I   CWLLG2155I:  Cache settings read have been from file file:////opt/ibm/WebSphere/AppServer/BPM/Lombardi/process-server/twinit/lib/basic_resources.jar!/LombardiTeamWorksCache.xml.
[11/07/18 12:20:28:768 BST] 00000001 XmlBeanDefini I org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions Loading XML bean definitions from class path resource [org/springframework/jdbc/support/sql-error-codes.xml]
[11/07/18 12:20:28:784 BST] 00000001 SQLErrorCodes I org.springframework.jdbc.support.SQLErrorCodesFactory <init> SQLErrorCodes loaded: [DB2, Derby, H2, HSQL, Informix, MS-SQL, MySQL, Oracle, PostgreSQL, Sybase]
[11/07/18 12:20:28:787 BST] 00000001 SQLErrorCodes W org.springframework.jdbc.support.SQLErrorCodesFactory getErrorCodes Error while extracting database product name - falling back to empty error codes
                                 org.springframework.jdbc.support.MetaDataAccessException: Error while extracting DatabaseMetaData; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory, cause: IO Error: The Network Adapter could not establish the connection


I checked that the Oracle 12c listener appeared to be running: -

netstat -aon|grep LISTEN

tcp        0      0 127.0.0.1:1521          0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp6       0      0 :::36875                :::*                    LISTEN      off (0.00/0/0)
tcp6       0      0 :::22                   :::*                    LISTEN      off (0.00/0/0)
tcp6       0      0 ::1:25                  :::*                    LISTEN      off (0.00/0/0)
tcp6       0      0 :::5500                 :::*                    LISTEN      off (0.00/0/0)


lsnrctl status listener

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 11-JUL-2018 12:42:50

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                10-JUL-2018 20:22:19
Uptime                    0 days 16 hr. 20 min. 30 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /home/oracle/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /home/oracle/app/oracle/diag/tnslsnr/bpm/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=bpm.uk.ibm.com)(PORT=5500))(Security=(my_wallet_directory=/home/oracle/app/oracle/admin/orcl/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "orcl.uk.ibm.com" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
Service "orclXDB.uk.ibm.com" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
The command completed successfully


Notice the problem ?

Both the netstat and the lsnrctl commands show that the listener is "bound" to 127.0.0.1 rather than to the server's hostname.
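The fix is to change the HOST entry in listener.ora to the server's host name. A hedged sketch of making that edit from the shell — the file path is the one reported by lsnrctl above, and listener.ora may carry either localhost or 127.0.0.1: -

```shell
# Swap the loopback HOST for the real host name in listener.ora,
# then bounce the listener for the change to take effect.
# Path as per the "Listener Parameter File" line in the lsnrctl output.
LISTENER_ORA=/home/oracle/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
sed -i \
  -e 's/(HOST=localhost)/(HOST=bpm.uk.ibm.com)/' \
  -e 's/(HOST=127.0.0.1)/(HOST=bpm.uk.ibm.com)/' \
  "$LISTENER_ORA"
```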




lsnrctl stop LISTENER

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 11-JUL-2018 12:44:58

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=bpm.uk.ibm.com)(PORT=1521)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 111: Connection refused
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
The command completed successfully


 lsnrctl start LISTENER

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 11-JUL-2018 12:45:09

Copyright (c) 1991, 2016, Oracle.  All rights reserved.

Starting /home/oracle/app/oracle/product/12.2.0/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.2.0.1.0 - Production
System parameter file is /home/oracle/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Log messages written to /home/oracle/app/oracle/diag/tnslsnr/bpm/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=bpm.uk.ibm.com)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=bpm.uk.ibm.com)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                11-JUL-2018 12:45:09
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /home/oracle/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /home/oracle/app/oracle/diag/tnslsnr/bpm/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=bpm.uk.ibm.com)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully


I then saw this: -

ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

which I solved via a previous blog post: -

So, the conclusion is that the Listener needs to know / care about the host name of the box: -

  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=bpm.uk.ibm.com)(PORT=1521)))

whereas the BPM -> Oracle connectivity needs to know / care about the Oracle Service Name: -

sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Jul 11 15:06:23 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select value from v$parameter where name='service_names';

VALUE
--------------------------------------------------------------------------------
orcl.uk.ibm.com

SQL>
exit

so the JDBC data sources need to look like this: -
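For the record, an Oracle thin-driver JDBC URL that targets a service name ( rather than a SID ) uses a slash before the service — a sketch using this environment's host, port and service: -

```shell
# Service-name form: jdbc:oracle:thin:@//host:port/service
# ( the SID form would instead be jdbc:oracle:thin:@host:port:SID )
echo "jdbc:oracle:thin:@//bpm.uk.ibm.com:1521/orcl.uk.ibm.com"
```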


This needs to be reflected in the BPM Deployment Environment properties file: -

bpm.de.db.1.databaseName=orcl.uk.ibm.com
bpm.de.db.2.databaseName=orcl.uk.ibm.com
bpm.de.db.3.databaseName=orcl.uk.ibm.com

So now we have this: -

/opt/ibm/WebSphereProfiles/Dmgr01/bin/bootstrapProcessServerData.sh -clusterName AppCluster

Bootstraping data into cluster AppCluster and logging into /opt/ibm/WebSphereProfiles/Dmgr01/logs/bootstrapProcesServerData.AppCluster.log

WASX7357I: By request, this scripting client is not connected to any server process. Certain configuration and application operations will be available in local mode.
'BootstrapProcessServerData admin command completed successfully.....'
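As a quick sanity check, one can grep the bootstrap log for the success message — a sketch, using the log path named in the command output above: -

```shell
# Look for the success message in the bootstrap log
# ( path as reported by bootstrapProcessServerData.sh above )
LOG=/opt/ibm/WebSphereProfiles/Dmgr01/logs/bootstrapProcesServerData.AppCluster.log
grep 'completed successfully' "$LOG"
```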


Microsoft Excel - Generating test data

We had a requirement to generate some test data within a Microsoft Excel spreadsheet, similar to this: -


This was the magic invocation: -

="Subscriber Reference #"&ROW()

We merely needed to copy that to the clipboard, and paste it into a nice chunk 'o cells ….

So it's a piece of text concatenated with the row() index, using the ampersand ( & ) symbol.
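For what it's worth, the same test data can be knocked up outside Excel with a quick shell loop — a sketch for the first five rows: -

```shell
# Emit "Subscriber Reference #<n>" for rows 1..5 ( adjust the range to taste )
for i in $(seq 1 5); do
  echo "Subscriber Reference #$i"
done
```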


IBM Cloudant - Another useful source

I've referenced Glynn Bird here before, as the author of the most excellent couchimport, couchbackup and couchrestore tools.

Here's Glynn's personal site: -


including even more lovely CouchDB ( Cloudant ) goodness.

Learning something new every day .... Where's my WAS Admin Console stuff for BPM ?

So today I'm working with a client to deploy IBM BPM Standard 8.6 alongside IBM Master Data Management 11.6, to leverage the Data Stewardship capabilities that the BPM/MDM tie-up offers.

In brief, this means that I'm installing the BPM binaries into an existing installation of WebSphere Application Server Network Deployment 8.5.5.13, which is hosting the MDM environment ( Deployment Manager, Node Agent and Application Server/Cluster ).

However, when I created the BPM Deployment Environment, using BPMConfig.sh, I'm specifying to use DIFFERENT WAS profiles, leaving the MDM stuff well alone.

So, for no obvious reason, I hit a small glitch with my Deployment Manager, having built the BPM Deployment Environment.

Whilst I can start/stop things, run up the Process Center/Portal/Admin UIs etc., I'm unable to see the BPM-related capabilities within the Deployment Manager ( Integrated Solutions Console ), such as: -




etc. even though the ISC *DID* show BPM as being installed : -


( ignore the version in this example; I've got 8.5.6 on my VM, but 8.6.0 on the customer's environment )

I also checked that the profile was properly augmented: -

cat /opt/ibm/WebSphere/AppServer/properties/profileRegistry.xml
 
<?xml version="1.0" encoding="UTF-8"?><profiles>
    <profile isAReservationTicket="false" isDefault="true" name="Dmgr01" path="/opt/ibm/WebSphereProfiles/Dmgr01" template="/opt/ibm/WebSphere/AppServer/profileTemplates/management">
        <augmentor template="/opt/ibm/WebSphere/AppServer/profileTemplates/BPM/BpmDmgr"/>
    </profile>
    <profile isAReservationTicket="false" isDefault="false" name="AppSrv01" path="/opt/ibm/WebSphereProfiles/AppSrv01" template="/opt/ibm/WebSphere/AppServer/profileTemplates/managed">
        <augmentor template="/opt/ibm/WebSphere/AppServer/profileTemplates/BPM/BpmNode"/>
    </profile>
</profiles>


but to no avail.

More weirdly, I received something similar to this: -

ServletWrappe E com.ibm.ws.webcontainer.servlet.ServletWrapper service SRVE0014E: Uncaught service() exception root cause /com.ibm.ws.console.servermanagement/addPropLayout.jsp: com.ibm.websphere.servlet.error.ServletErrorReport: javax.servlet.jsp.JspException: Missing message for key "addprops.category.label.businessintegration"

when I attempted to navigate here: -


and the right-hand side of the page merely contained that exception.

Thankfully this post had the answer: -


specifically this bit: -

3) run the command to restore admin console application: [Dmgr profile]/bin/iscdeploy.sh -restore
...

So, having shut down ALL the JVMs, I did the needful: -

/opt/ibm/WebSphereProfiles/Dmgr01/bin/iscdeploy.sh

and magically it fixed the problem.

I'm not sure how I got here, but glad I found a fix.

Thanks, Internet, you rock !



WebSphere Application Server - Testing JDBC connections via Jython and the EJBTimer

As part of a recent engagement, I'd written a simple Jython script to test WAS -> database connections: -

cellID = AdminControl.getCell()
cell=AdminConfig.getid( '/Cell:'+cellID+'/')
for dataSource in AdminConfig.list('DataSource',cell).splitlines():
 print dataSource
 AdminControl.testConnection(dataSource)


However, when I ran this against an IBM Business Process Manager Standard 8.6 environment, I saw this: -

DefaultEJBTimerDataSource(cells/PCCell1/applications/commsvc.ear/deployments/commsvc|resources.xml#DataSource_1228749623069)
WASX7017E: Exception received while running file "/mnt/Scripts/testDataSource.jy"; exception information: com.ibm.websphere.management.exception.AdminException
javax.management.MBeanException
java.sql.SQLException: java.sql.SQLException: Database '/opt/ibm/WebSphereProfiles/AppSrv01/databases/EJBTimers/AppClusterMember1/EJBTimerDB' not found. DSRA0010E: SQL State = XJ004, Error Code = 40,000

which was an annoyance, as I'm not actively using the EJBTimer datasource.

As ever, the solution was simple, rather than testing ALL datasources within the cell, I changed the script to only test the datasources that are specifically part of the BPM Deployment Environment i.e. those that are scoped at cluster level.

cluster=AdminConfig.getid("/ServerCluster:AppCluster/")
for dataSource in AdminConfig.list('DataSource',cluster).splitlines():
 print dataSource
 AdminControl.testConnection(dataSource)

For a BPM Standard environment, this is good enough ...

Also, for the record, it's possible to see the EJBTimer datasources within the WAS Integrated Solutions Console: -




which is nice.

DB2 - Moving databases

This is definitely a Your Mileage May Vary (YMMV) post.

If in doubt, please check with IBM Support *BEFORE* following the steps outlined here …

So I had a requirement to rename some IBM BPM databases from their default names of BPMDB, CMNDB and PDWDB.

This is related to IBM BPM 8.6 on DB2 v11.1.2.2 although the same approach works for DB2 v10.5 as well.

Thankfully DB2 comes with a useful db2relocatedb tool, as described here: -


So, before doing this for real, I wanted to test it using the SAMPLE database.

This is what I did ….

Switch to the instance owner

su - db2inst1

Create the SAMPLE database

 db2sampl 

  Creating database "SAMPLE"...
  Connecting to database "SAMPLE"...
  Creating tables and data in schema "DB2INST1"...
  Creating tables with XML columns and XML data in schema "DB2INST1"...

  'db2sampl' processing complete.


Validate the current catalog

db2 list db directory

 System Database Directory

 Number of entries in the directory = 4

Database 4 entry:

 Database alias                       = SAMPLE
 Database name                        = SAMPLE
 Local database directory             = /home/db2inst1
 Database release level               = 14.00
 Comment                              =
 Directory entry type                 = Indirect
 Catalog database partition number    = 0
 Alternate server hostname            =
 Alternate server port number         =

Validate the current DB storage

ls -al /home/db2inst1/db2inst1/NODE0000/SAMPLE

total 4
drwx--x--x   8 db2inst1 db2iadm1  114 Jul 23 13:39 .
drwxrwxr-x. 11 db2inst1 db2iadm1 4096 Jul 23 13:36 ..
-rw-------   1 db2inst1 db2iadm1    0 Jul 23 13:36 .SQLCRT.FLG
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000000
drwx--x--x   3 db2inst1 db2iadm1   43 Jul 23 13:37 T0000001
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000002
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000003
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000004
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:39 T0000005


Connect to SAMPLE

db2 connect to sample

   Database Connection Information

 Database server        = DB2/LINUXX8664 11.1.2.2
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE


Check that we can access data

db2 "select * from db2inst1.employee where empno = '000010'"

EMPNO  FIRSTNME     MIDINIT LASTNAME        WORKDEPT PHONENO HIREDATE   JOB      EDLEVEL SEX BIRTHDATE  SALARY      BONUS       COMM       
------ ------------ ------- --------------- -------- ------- ---------- -------- ------- --- ---------- ----------- ----------- -----------
000010 CHRISTINE    I       HAAS            A00      3978    01/01/1995 PRES          18 F   24/08/1963   152750.00     1000.00     4220.00

  1 record(s) selected.

Terminate the connection

db2 terminate

DB20000I  The TERMINATE command completed successfully.

Create a template configuration file

This defines the FROM and TO states

vi sample.cfg

DB_NAME=SAMPLE,SAMPLENE
DB_PATH=/home/db2inst1
INSTANCE=db2inst1
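
Equally, the same file can be created non-interactively with a here-document — a sketch, with the same values as above: -

```shell
# Write the relocate config in one go
cat > sample.cfg <<'EOF'
DB_NAME=SAMPLE,SAMPLENE
DB_PATH=/home/db2inst1
INSTANCE=db2inst1
EOF
```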


Move the database from the old container to the new container

Note that this works for me because my database has a single partition, and is located in the instance owner's home directory
- This is where YOUR mileage MAY/WILL vary

mv /home/db2inst1/db2inst1/NODE0000/SAMPLE /home/db2inst1/db2inst1/NODE0000/SAMPLENE

Validate the new DB storage layout

 ls -al /home/db2inst1/db2inst1/NODE0000/SAMPLENE/

total 4
drwx--x--x   8 db2inst1 db2iadm1  114 Jul 23 13:39 .
drwxrwxr-x. 11 db2inst1 db2iadm1 4096 Jul 23 14:28 ..
-rw-------   1 db2inst1 db2iadm1    0 Jul 23 13:36 .SQLCRT.FLG
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000000
drwx--x--x   3 db2inst1 db2iadm1   43 Jul 23 13:37 T0000001
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000002
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000003
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:36 T0000004
drwx--x--x   2 db2inst1 db2iadm1   43 Jul 23 13:39 T0000005


Run the db2relocatedb command to update the catalog

db2relocatedb -f sample.cfg

Files and control structures were changed successfully.
Database was catalogued successfully.
DBT1000I  The tool completed successfully.

Validate the updated catalog

db2 list db directory

 System Database Directory

 Number of entries in the directory = 4


Database 4 entry:

 Database alias                       = SAMPLENE
 Database name                        = SAMPLENE
 Local database directory             = /home/db2inst1
 Database release level               = 14.00
 Comment                              =
 Directory entry type                 = Indirect
 Catalog database partition number    = 0
 Alternate server hostname            =
 Alternate server port number         =


Connect to SAMPLENE

db2 connect to samplene

   Database Connection Information

 Database server        = DB2/LINUXX8664 11.1.2.2
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLENE


Check that we can access data

db2 "select * from db2inst1.employee where empno = '000010'"

EMPNO  FIRSTNME     MIDINIT LASTNAME        WORKDEPT PHONENO HIREDATE   JOB      EDLEVEL SEX BIRTHDATE  SALARY      BONUS       COMM       
------ ------------ ------- --------------- -------- ------- ---------- -------- ------- --- ---------- ----------- ----------- -----------
000010 CHRISTINE    I       HAAS            A00      3978    01/01/1995 PRES          18 F   24/08/1963   152750.00     1000.00     4220.00

  1 record(s) selected.


Terminate the connection

db2 terminate

DB20000I  The TERMINATE command completed successfully.

Again, this is definitely a Your Mileage May Vary (YMMV) post.

If in doubt, please check with IBM Support *BEFORE* following the steps outlined here …