On holding jobs rather than job queues

Several months ago, I talked about Debugging RPG Programs in Batch and said that, before you submit your job, you should first hold the job queue.

While this isn’t bad advice, exactly, it’s not what I actually do.

When I am debugging a job in batch, I will submit it with HOLD(*YES). The job then sits on the job queue while I start the service job and debug, and then I can release the job.

Holding the job rather than the job queue reduces the impact I have on other users of the system since no other jobs are waiting for the job queue to be released. It also means that I don’t have to worry about someone else noticing the job queue is held and releasing it before I’m ready to start testing.
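As a sketch (the program name is reused from later in these notes; the job name and job number are placeholders), the sequence looks something like this:

SBMJOB CMD(CALL PGM(LSCLIB/LS0190R)) JOB(LSTEST) HOLD(*YES)
STRSRVJOB JOB(123456/PAUL/LSTEST)
STRDBG PGM(LSCLIB/LS0190R) UPDPROD(*NO)
RLSJOB JOB(123456/PAUL/LSTEST)

WRKSBMJOB will give you the qualified job name to plug into STRSRVJOB and RLSJOB, and once the job is released it runs under the debug session you have just set up.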

Service Programs, signatures and RTVBNDSRC

I have talked about service programs in the past and, at the time, I mentioned that you need a binder source without really saying why. So, to revisit the subject: using the binder language allows you to change your service program without needing to recompile all of the programs that reference it.

It all comes down to signatures.

You can display the signature of a service program using the following command (adjust for your own library and service program, obviously):

DSPSRVPGM SRVPGM(LSCLIB/LSSUTIL) DETAIL(*SIGNATURE)

This will display a screen similar to the one below:

                      Display Service Program Information
                                                                 Display 1 of 1
 Service program  . . . . . . . . . . . . :   LSSUTIL
   Library  . . . . . . . . . . . . . . . :     LSCLIB
 Owner  . . . . . . . . . . . . . . . . . :   QPGMR
 Service program attribute  . . . . . . . :   RPGLE
 Detail . . . . . . . . . . . . . . . . . :   *SIGNATURE

                                  Signatures:

 62C9C0FAA16B653C4359413F3E9CEB81

Now take a look at any program that references the service program:

DSPPGM PGM(LSCLIB/LS0190R) DETAIL(*SRVPGM)

This will display something like this:

                          Display Program Information
                                                                 Display 1 of 1
 Program  . . . . . . . :   LS0190R       Library  . . . . . . . :   LSCLIB
 Owner  . . . . . . . . :   QPGMR
 Program attribute  . . :   RPGLE
 Detail . . . . . . . . :   *SRVPGM


 Type options, press Enter.
   5=Display

      Service
 Opt  Program     Library     Activation  Signature
      LSSUTIL     *LIBL       *IMMED      62C9C0FAA16B653C4359413F3E9CEB81
      QRNXIE      QSYS        *IMMED      D8D9D5E7C9C540404040404040404040
      QRNXIO      QSYS        *IMMED      D8D9D5E7C9D640404040404040404040
      QLEAWI      QSYS        *IMMED      44F70FABA08585397BDF0CF195F82EC1

Note that the signature of the service program needs to match the signature the referencing program expects; if they don’t match, you will waste a day dealing with signature violation errors.

So if I want to add a procedure to a service program (which will change the service program’s signature) without having to identify and recompile every program that refers to it, I need to make use of the binder language.

I generally create the binder language source manually but, if you don’t have this, you can use the RTVBNDSRC command. To retrieve the source for service program LSSUTIL in library LSCLIB you would do this:

RTVBNDSRC SRVPGM(LSCLIB/LSSUTIL) SRCFILE(LSCLIB/QSRVSRC) MBROPT(*ADD)

Then go and look at your binder source and you will see a listing of all your exported procedures in a format that looks something like this:

STRPGMEXP PGMLVL(*CURRENT) SIGNATURE(X'62C9C0FAA16B653C4359413F3E9CEB81')
/********************************************************************/
/*   *MODULE      LSSUTIL      LSCLIB      08/02/18  10:49:27      */
/********************************************************************/
   EXPORT SYMBOL("GETCORRECTION")
   EXPORT SYMBOL("GETERRTEXT")
ENDPGMEXP

Just to tidy things up a bit, move the comment lines to the top and get rid of the SIGNATURE keyword (it’s the PGMLVL that you really need) so that the source now looks like this:

/********************************************************************/
/*   *MODULE      LSSUTIL      LSCLIB      08/02/18  10:49:27      */
/********************************************************************/
STRPGMEXP PGMLVL(*CURRENT)
  EXPORT SYMBOL("GETCORRECTION")
  EXPORT SYMBOL("GETERRTEXT")
ENDPGMEXP

Now you want to tell the binder that this export list is the previous version, and that the new current version of the service program contains your shiny new procedure. This is how:

/********************************************************************/
/*   *MODULE      LSSUTIL      LSCLIB      08/02/18  10:49:27      */
/********************************************************************/
STRPGMEXP PGMLVL(*CURRENT)
  EXPORT SYMBOL("GETCORRECTION")
  EXPORT SYMBOL("GETERRTEXT")
  EXPORT SYMBOL("GETSHINY")
ENDPGMEXP
/********************************************************************/
STRPGMEXP PGMLVL(*PRV)
  EXPORT SYMBOL("GETCORRECTION")
  EXPORT SYMBOL("GETERRTEXT")
ENDPGMEXP
/********************************************************************/

Then create your service program using:

CRTSRVPGM SRVPGM(LSCLIB/LSSUTIL) MODULE(LSCLIB/LSSUTIL) SRCFILE(LSCLIB/QSRVSRC) SRCMBR(LSSUTIL)

And display the service program signatures:

DSPSRVPGM SRVPGM(LSCLIB/LSSUTIL) DETAIL(*SIGNATURE)

And here’s what you should have:

                      Display Service Program Information
                                                                 Display 1 of 1
 Service program  . . . . . . . . . . . . :   LSSUTIL
   Library  . . . . . . . . . . . . . . . :     LSCLIB
 Owner  . . . . . . . . . . . . . . . . . :   QPGMR
 Service program attribute  . . . . . . . :   RPGLE
 Detail . . . . . . . . . . . . . . . . . :   *SIGNATURE

                                  Signatures:

 2C9C0F7DED8A1897F656D6C9CBB17DDD
 62C9C0FAA16B653C4359413F3E9CEB81

The service program now has two signatures. The new one is at the top of the list and the previous one is below it. What’s more, I can specify PGMLVL(*PRV) as many times as I need to.

When a program uses the service program, the system looks down the list of signatures until it finds the one the program was bound with. This means that you can change your service program as much and as many times as you like without needing to worry about any of the programs that use it.
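For reference, a calling program picks up the signature when it is bound to the service program, typically with something like this (a sketch using the names from above; your build process may differ):

CRTPGM PGM(LSCLIB/LS0190R) MODULE(LSCLIB/LS0190R) BNDSRVPGM(LSCLIB/LSSUTIL)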

A final note

You only need to change your binder source when you change the list of exported procedures (generally by adding a new procedure). Changing the internals of an existing procedure won’t affect your signatures.

A final final note

When you do add a new procedure, always add it to the bottom of the export list. Exports are matched by position, so inserting a procedure anywhere else shifts every export after it and invalidates the previously generated signatures.

Copying data between heterogeneous databases with CPYFRMIMPF

Last week I mentioned the CPYTOIMPF command and how easy it makes reformatting data so that it can be handled by other platforms. Not surprisingly, the IBM i makes it just as easy to handle data coming back with the CPYFRMIMPF command.

Using the same files as last time, the command looks like this:

CPYFRMIMPF FROMSTMF('/home/PAUL/MarketingReport.csv') TOFILE(MRKRPT01)
           MBROPT(*REPLACE) RCDDLM(*CRLF) FLDDLM(';')

This time, I am copying data from the MarketingReport.csv CSV file into a previously defined (and it must be previously defined) table, MRKRPT01. I’m replacing the existing data, using the Windows convention for the record delimiter and the local semicolon convention for the field delimiter.
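If the target table doesn’t exist yet, you need to create it before running the copy. A minimal sketch (the column definitions here are invented for illustration) might look like this:

CREATE TABLE LSCLIB/MRKRPT01
  (REGION  CHAR(10)       NOT NULL,
   PRODUCT CHAR(15)       NOT NULL,
   REVENUE DECIMAL(11, 2) NOT NULL)

The columns obviously need to line up, in order and type, with the fields in the incoming CSV.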

All of this means that, regardless of where a set of data originated, making it available to applications on the IBM i really is a piece of cake.

Copying data between heterogeneous databases with CPYTOIMPF

One of the many things that the IBM i (and its predecessors) has always been good at is exchanging data with other platforms. And one thing that surprises me is that people often don’t realise just how easy sharing data can be.

How easy? This easy:

CPYTOIMPF FROMFILE(MRKRPT01) TOSTMF('/home/PAUL/MarketingReport.csv')
          MBROPT(*REPLACE) RCDDLM(*CRLF) DTAFMT(*DLM)
          RMVBLANK(*TRAILING) FLDDLM(';')

In this example, I am copying a physical file (table) called MRKRPT01 to a CSV file called MarketingReport.csv in the IFS. I’m using MBROPT(*REPLACE) to ensure the new file overwrites the existing file, and RCDDLM(*CRLF) appends a Windows-standard carriage return and line feed to the end of each line.

Then things become interesting.

DTAFMT(*DLM) indicates that the data contains delimiter characters, just like a CSV file should. And FLDDLM(';') indicates what the delimiter character should be. By default, this is a comma but I’m in Belgium where standards are different, so I’m using a semi-colon instead.

The RMVBLANK(*TRAILING) removes the trailing blanks from the character fields, which ensures the output is as tidy as it looks.

I now have a perfectly formatted CSV file which can be distributed by FTP, SMTP or by sharing the folder so that end users can find the file themselves. And if you’re using QNTC you can change the TOSTMF parameter to something like ‘/QNTC/path/to/server/folder/file.csv’ in order to put the file wherever you like on the network.
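For example (the server and share names here are invented), the QNTC variant would look like this:

CPYTOIMPF FROMFILE(MRKRPT01) TOSTMF('/QNTC/SERVER01/REPORTS/MarketingReport.csv')
          MBROPT(*REPLACE) RCDDLM(*CRLF) DTAFMT(*DLM)
          RMVBLANK(*TRAILING) FLDDLM(';')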

Guaranteed exit code with ON-EXIT

Here’s a handy one:

The ON-EXIT operation code begins the ON-EXIT section. The ON-EXIT section contains code that runs every time that the procedure ends, whether it ends normally or abnormally. The ON-EXIT section runs under the following conditions:

  • The procedure reaches the end of the main part of the procedure.
  • The procedure reaches a RETURN operation.
  • The procedure ends with an unhandled exception.
  • The procedure is canceled, due to the end of the job or subsystem, or due to an exception message being sent to a procedure higher in the call stack.

By placing your clean-up code, such as deleting temporary files and deallocating heap storage, in the ON-EXIT section, you ensure that it is always run, even if your procedure ends with an unhandled exception, or if it is canceled.

As someone who likes to exit procedures as quickly as possible, I think this opcode has a great deal of potential when it comes to simplifying the structure of my code and reducing the number of things I have to think about. This is always a good thing.
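As a minimal sketch, here is the shape of a procedure that allocates some heap storage and must always free it, however the procedure ends:

        dcl-proc BuildList;
            dcl-pi *n end-pi;

            dcl-s buffer pointer inz(*null);

            buffer = %alloc(1024);
            // ... work that might return early or raise an exception ...
            return;

            on-exit;
                // Runs on normal return, unhandled exception or cancellation
                if buffer <> *null;
                    dealloc(n) buffer;
                endif;
        end-proc;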

ON-EXIT was introduced in version 7.2 and is well worth investigating.

Converting to CHAR with leading zeros

This is as much a reminder to myself as anything else, because the %CHAR built-in function has the unfortunate behaviour of stripping leading zeros when converting a number to a character value. Sometimes I need to keep those leading zeros, for example when handling elderly date fields.

Fortunately, there’s %EDITC.

This function returns a character result representing the numeric value edited according to the edit code.

The edit codes allow you to pull all sorts of neat little tricks but, for my purposes, the most useful one is the X edit code, which (obscurely) ensures a hexadecimal F sign for positive values. Handily, this means that leading zeros are preserved when converting from numeric to character.

Here’s an example:

      **Free

        // Converting to character with leading zeros
        ctl-opt dftactgrp(*no) actgrp(*new) main(Main);

        dcl-proc Main;
            dcl-pi *n end-pi;

            dcl-s numYear packed(4) inz(2016);
            dcl-s numMonth packed(2) inz(9);
            dcl-s numDay packed(2) inz(1);
            dcl-s charDate char(8) inz;
            dcl-c edit 'X';

            // This converts the numeric fields into the string '20160901'
            charDate = %editc(numYear: edit) +
                       %editc(numMonth: edit) +
                       %editc(numDay: edit);

            // And then I can convert the string into a date and display it
            dsply %date(charDate: *ISO);

            return;

        end-proc;

And if you need a reminder of what the other edit codes are, or what they do, the ILE RPG Reference has a full list of them.

Better file handling in CL with the CLOSE command

CL is an often undervalued part of the i developer’s toolkit, and one that IBM have been quietly enhancing over the years. It does have some quirks, however, the most annoying being that, once you reach the end of a file you are reading, there is no way of repositioning the pointer and reading it again.

Actually, there is.

The command you need to know about is the Close Database File (CLOSE) command:

The Close Database File (CLOSE) command will explicitly close a database file that was implicitly opened by a Receive File (RCVF) command run within the same CL program or ILE CL procedure. A file with the same open identifier must have been declared by a Declare File (DCLF) command within the same CL program or ILE CL procedure.

After the CLOSE command is run, the next RCVF command for this file will implicitly reopen the file and read a record from the file.

That last line is the really important piece. After you close the file, the next RCVF command will re-open it and your file pointer is back at the first record.

Here’s an example:

pgm
    dcl &FROMFILE *char 100
    dcl &TOMBR *char 60 value('/qsys.lib/LSCLIB.lib/IMPORTWORK.file/IMPORTWORK.mbr')
    dcl &QSH *char 100
    dclf file(FTPFILES) opnid(LIST)

    /* Merge Original FTP data into Reload File                               */
    ovrdbf file(INPUT) tofile(LSCLIB/QFTPSRC) mbr(GETFULL)
    ovrdbf file(OUTPUT) tofile(LSCLIB/QFTPSRC) mbr(FTPLOG)
    ftp 'ftp.customer.be'
    dltovr OUTPUT
    dltovr INPUT
    callsubr MERGEDATA

    /* Merge correction FTP data into Reload File                             */
    ovrdbf file(INPUT) tofile(LSCLIB/QFTPSRC) mbr(GETCORR)
    ovrdbf file(OUTPUT) tofile(LSCLIB/QFTPSRC) mbr(FTPLOG)
    ftp 'ftp.customer.be'
    dltovr OUTPUT
    dltovr INPUT
    callsubr MERGEDATA

/* Merge retrieved FTP records with missing data into Reload File             */
subr MERGEDATA

    /* List the retrieved files                                               */
    qsh cmd('ls /home/PAUL/FTPIN/ > /home/PAUL/SentFiles.txt')
    clrpfm FTPFILES
    cpyfrmstmf fromstmf('/home/PAUL/SentFiles.txt') +
               tombr('/QSYS.LIB/LSCLIB.LIB/FTPFILES.FILE/FTPFILES.MBR') +
               mbropt(*REPLACE)

    /* Then use this list to read through and import the files                */
    dowhile '1'
        rcvf opnid(LIST)
        monmsg msgid(CPF0864) exec(leave)

        /* Copy and process the stream file                                   */
        chgvar &FROMFILE value('/home/PAUL/FTPIN/' *tcat &LIST_FTPFILES)
        cpyfrmstmf fromstmf(&FROMFILE) tombr(&TOMBR) mbropt(*REPLACE)
        cpyf fromfile(LSCLIB/IMPORTWORK) tofile(LSCLIB/IMPORT) mbropt(*REPLACE) +
             fmtopt(*NOCHK)
        call LSCMRG001R parm(&LIST_FTPFILES)

        /* Delete the stream file once we've finished with it                 */
        chgvar &QSH value('rm /home/PAUL/FTPIN/' *tcat &LIST_FTPFILES)
        qsh cmd(&QSH)

    enddo

    /* Clean up and return                                                    */
    close opnid(LIST)
    qsh cmd('rm /home/PAUL/SentFiles.txt')

endsubr

endpgm

What I am doing here is retrieving a set of files from a customer’s FTP server, and then retrieving the corrections from the same server. After each retrieval, I execute the MERGEDATA subroutine. This builds a list of the retrieved files and copies it to the FTPFILES physical file.

The DOWHILE loop copies each stream file into a physical file (IMPORT) and then calls program LSCMRG001R to merge the retrieved data into the enterprise application.

Note that once I leave the loop, I close the file. This means that I can reuse the MERGEDATA subroutine as many times as I need to, for whatever selection of files I happen to retrieve.
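And stripped of the FTP plumbing, the re-read trick itself is just this (a sketch reusing the FTPFILES file declared above):

pgm
    dclf file(FTPFILES) opnid(LIST)

    /* First pass through the file                                            */
    dowhile '1'
        rcvf opnid(LIST)
        monmsg msgid(CPF0864) exec(leave)
    enddo

    /* Close the file, and the next RCVF reopens it at the first record      */
    close opnid(LIST)

    dowhile '1'
        rcvf opnid(LIST)
        monmsg msgid(CPF0864) exec(leave)
    enddo
endpgm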

Using the SQL Translate Function

The SQL TRANSLATE function is worth knowing about if you ever find yourself needing to (for example) fix data from an external feed before you attempt to do anything with it.

The function returns a value in which one or more characters in expression have been converted into other characters. You can pass it up to four arguments: the expression (obviously), the to-string, the from-string, and a pad character.

You can use the function with only the expression, in which case the characters are simply converted to uppercase, which is a bit pointless. Things become useful when you specify the to-string and from-string:

If more than one argument is specified, the result string is built character by character from expression, converting characters in from-string to the corresponding character in to-string. For each character in expression, the same character is searched for in from-string. If the character is found to be the nth character in from-string, the resulting string will contain the nth character from to-string. If to-string is less than n characters long, the resulting string will contain the pad character. If the character is not found in from-string, it is moved to the result string unconverted.

Here’s an example.

I’ve knocked together a sample table with two columns — a code and a description — to represent a data feed. The description can contain accented characters, but we don’t want to see these in the target system.

The unadjusted table looks like this:

IDX   CODE    DESCRIPTION  
  1   FAC01   Façile       
  2   UML01   Umlaüt       

To get rid of those accents in the description, this query…

select                                                          
 IDX, CODE, translate(DESCRIPTION, 'eeaaocuouee', 'éèâàôçûöüëê')
from FEED                                                       

… returns this result:

IDX   CODE    TRANSLATE 
  1   FAC01   Facile    
  2   UML01   Umlaut    

If the from-string is longer than the to-string, the pad character is used to pad the to-string to ensure the lengths match. So this query:

select translate(CODE, 'xx', '01') from FEED

… is equivalent to this one…

select translate(CODE, 'x', '01', 'x') from FEED

I’m not entirely convinced of the value of this argument. It is probably both safer and clearer to simply ensure your to-string and from-string are the same length.
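Back to the original use case of fixing a feed before using it: the same expression works just as well in an UPDATE, so a sketch like this would de-accent the descriptions in place:

update FEED
   set DESCRIPTION = translate(DESCRIPTION, 'eeaaocuouee', 'éèâàôçûöüëê')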

Interpreting the DSPPGMREF file usage

I mentioned the DSPPGMREF command a couple of weeks ago. This command allows you to generate a list of system objects used by specified programs and is particularly handy when you want to see which programs are doing what to a specific file. The only quirk is that the file usage field is a two-digit field that needs to be interpreted.

This field is a decimal representation of a three-bit binary value in which the rightmost bit (1) indicates the file is used for input, the middle bit (2) indicates the file is used for output, and the leftmost bit (4) indicates the file is used for update. Adding the bits together gives you a single decimal digit that fully describes how the file is used: 3 (1 + 2) means input and output, for example, while 5 (1 + 4) means input and update.

Interpreting this field is simple enough, but I do have to keep looking up what the values mean. Since I am directing the output to a file and using SQL to query this file, I am also letting the SQL bitwise functions interpret the file usage for me.
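For reference, the outfile queried below can be built with something like this (adjust the program filter to taste):

DSPPGMREF PGM(LSCLIB/*ALL) OUTPUT(*OUTFILE) OUTFILE(LSCLIB/CROSSREF)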

At its simplest, this query:

select WHLIB, WHPNAM, WHTEXT, WHFNAM, WHLNAM, WHSNAM, WHFUSG,
       bitand(WHFUSG, 1) as Input,                           
       bitand(WHFUSG, 2) as Output,                          
       bitand(WHFUSG, 4) as Update                           
FROM LSCLIB/CROSSREF WHERE WHSNAM like '%LSCTST%'              

… gives me a result that looks like this:

Library     Program     Object       Library      Source       File Usage  INPUT  OUTPUT  UPDATE
                        Referenced                File Name                                     
LSCLIB      LSCT100     LSCTSTP      LSCLIB       LSCTSTP           3        1       2       0  
LSCLIB      LSCT100N    LSCTSTP      LSCLIB       LSCTSTP           3        1       2       0  
LSCLIB      LSCT110N    LSCTSTL1     LSCLIB       LSCTSTL1          1        1       0       0  
LSCLIB      LSCT120     LSCTSTL1     LSCLIB       LSCTSTL1          5        1       0       4  
LSCLIB      LSCT120N    LSCTSTL1     LSCLIB       LSCTSTL1          5        1       0       4  

If the file is used for input, the INPUT column contains a 1. If the file is used for output, the OUTPUT column contains a 2. And, if the file is updated, the UPDATE column contains a 4.

This is immediately more readable than just looking at the raw File Usage field, but it can be simplified further by using some case statements:

select WHLIB, WHPNAM, WHFNAM, WHLNAM, WHSNAM, WHFUSG,              
  case bitand(WHFUSG, 1) when 1 then 'Yes' else 'No' end as Input, 
  case bitand(WHFUSG, 2) when 2 then 'Yes' else 'No' end as Output,
  case bitand(WHFUSG, 4) when 4 then 'Yes' else 'No' end as Update 
FROM LSCLIB/CROSSREF WHERE WHSNAM like '%LSCTST%'                    

… gives me a result that looks like this:

Library     Program     Object       Library      Source       File Usage  INPUT  OUTPUT  UPDATE
                        Referenced                File Name                                     
LSCLIB      LSCT100     LSCTSTP      LSCLIB       LSCTSTP           3       Yes    Yes     No   
LSCLIB      LSCT100N    LSCTSTP      LSCLIB       LSCTSTP           3       Yes    Yes     No   
LSCLIB      LSCT110N    LSCTSTL1     LSCLIB       LSCTSTL1          1       Yes    No      No   
LSCLIB      LSCT120     LSCTSTL1     LSCLIB       LSCTSTL1          5       Yes    No      Yes  
LSCLIB      LSCT120N    LSCTSTL1     LSCLIB       LSCTSTL1          5       Yes    No      Yes  

On terminology

No, I’m not going to revive the perennial — and pointless — argument about what we call the platform. It’s an IBM Power System running the IBM i operating system and there really is nothing else to discuss here.

Beyond this, however, there is terminology that we use which is at odds with that used elsewhere in the IT industry. This can, and does, create unnecessary communication barriers when talking to colleagues. It can also come back to haunt us when we try to explain that the i on Power is as modern and as capable as whatever shiny thing the tech press has alighted on this week, if not more so.

With that in mind, I propose that we stop talking about files, records and fields.

Like it or not, the terminology that comes with SQL has become the language people use when discussing relational databases, and those of us working with DB2 for i need to make clear that we know what we are talking about before we start discussing the advantages we enjoy.

So here’s a handy guide:

  • Don’t say library, say schema
  • Don’t say (physical) file, say table
  • Don’t say record, say row
  • Don’t say field, say column
  • Don’t say logical file, say index (or view)

Even if you have avoided SQL and stuck with native I/O, using modern terminology will help you. It will ensure that you and your colleagues are using the same words when talking about the same things. Updating your terminology will also underline the fact that we are using a modern, fully featured relational database with more than enough power to remain relevant for many years to come.

Happy New Year.