
Monday, September 30, 2019

Converting an array to a CSV file and vice-versa

The CSV object included with OpenAF provides some basic functionality for handling CSV files. Although it is not intended for "huge" CSV files, some of these functions are handy to quickly convert javascript arrays (composed of javascript map entries) to and from a CSV file.

The main CSV functions to use are: CSV.fromArray2File and CSV.fromFile2Array.

Converting from an Array to a CSV file

Here is an example:

// Let's prepare a sample array with all the file info from the current folder
var myArray = io.listFiles(".").files;

// Let's create a CSV object instance
var csv = new CSV();

// Now simply write a file based on the existing array
csv.fromArray2File(myArray, "mycsv.csv");

Now if you check the newly created (or overwritten) CSV file it will look similar to:

"isDirectory","isFile","filename","filepath","canonicalPath","lastModified","createTime","lastAccess","size","permissions"
"false","true","opack","./opack","/openaf/opack",1569823332000,1569823332000,1569823332000,353,"xrw"
"false","true","openaf","./openaf","/openaf/openaf",1569823332000,1569823332000,1569823332000,342,"xrw"

which maps the original array content:

> myArray
[{
    isDirectory: false,
    isFile: true,
    filename: "opack",
    filepath: "./opack",
    canonicalPath: "/openaf/opack",
    lastModified: 1569823332000,
    createTime: 1569823332000,
    lastAccess: 1569823332000,
    size: 353,
    permissions: "xrw"
}, {
    isDirectory: false,
    isFile: true, 
    filename: "openaf",
    filepath: "./openaf",
    canonicalPath: "/openaf/openaf",
    lastModified: 1569823332000,
    createTime: 1569823332000,
    lastAccess: 1569823332000,
    size: 342,
    permissions: "xrw"
}
...

If you examine it carefully, the boolean values from the javascript maps were converted to strings. This is expected since the CSV format only maps javascript Number and String, trying to convert any other type to String.

Note: Javascript maps can contain sub-maps and sub-arrays. These won't be correctly converted to CSV.

Specific array fields

As an optional third argument of the CSV.fromArray2File function you can also limit the fields used or provide a specific order for them:

csv.fromArray2File(myArray, "mycsv.csv", ["canonicalPath", "size"]);
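For illustration only, the field-limited conversion can be mimicked in plain JavaScript (a sketch of the idea behind the third argument, not OpenAF's actual implementation; the toCsv name is hypothetical):

```javascript
// Sketch: build CSV text from an array of maps, limited to chosen fields.
// Numbers stay unquoted; everything else becomes a quoted string,
// mirroring the type mapping described above.
function toCsv(arr, fields) {
  var quote = function(v) { return '"' + String(v) + '"'; };
  var header = fields.map(quote).join(",");
  var rows = arr.map(function(entry) {
    return fields.map(function(f) {
      var v = entry[f];
      return (typeof v === "number") ? v : quote(v);
    }).join(",");
  });
  return [header].concat(rows).join("\n");
}

toCsv([{ canonicalPath: "/openaf/opack", size: 353 }], ["canonicalPath", "size"]);
// → '"canonicalPath","size"\n"/openaf/opack",353'
```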

Converting a CSV file into an array

Using the previous example where we converted a javascript array into a CSV file, it's easy to convert back to an array:

var myNewArray = csv.fromFile2Array("mycsv.csv");

But, as previously warned, some types will be represented as strings:

> myNewArray
[{
  isDirectory: "false",
  isFile: "true",
  filename: "opack",
  filepath: "./opack",
  canonicalPath: "/openaf/opack",
  lastModified: "1569823332000",
  createTime: "1569823332000",
  lastAccess: "1569823332000",
  size: "353",
  permissions: "xrw"
}, {
...

Nevertheless, these functions provide a quick and easy way to convert javascript arrays to and from CSV files, which can be useful when you need to use the javascript data with other tools.
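If you need the original types back after reading a CSV, a small post-processing step can coerce the string values. This is a plain JavaScript sketch (the coerceTypes name is hypothetical, and the heuristic would also convert a purely numeric filename):

```javascript
// Sketch: coerce the all-string values read back from a CSV into
// booleans and numbers where the text clearly represents one.
function coerceTypes(arr) {
  return arr.map(function(entry) {
    var out = {};
    for (var k in entry) {
      var v = entry[k];
      if (v === "true") out[k] = true;
      else if (v === "false") out[k] = false;
      else if (v !== "" && !isNaN(Number(v))) out[k] = Number(v);
      else out[k] = v;
    }
    return out;
  });
}

coerceTypes([{ isFile: "true", size: "353", filename: "opack" }]);
// → [{ isFile: true, size: 353, filename: "opack" }]
```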

Wednesday, September 25, 2019

Creating a ZIP file

OpenAF includes a specific plugin that groups all the ZIP-related functionality and tries to make it easy to use.

In this case we will show how easy it is to create a ZIP file. We have two local files that we want to add to a new zip file: myclass.java and myclass.class.

plugin("ZIP");
var zip = new ZIP();

zip.putFile("src/myclass.java", "myclass.java");
zip.putFile("bin/myclass.class", "myclass.class");

zip.generate2File("myclass.zip", { compressionLevel: 9 });
zip.close();

If you look carefully, we are taking two local files from the same folder but storing them in different folders inside the ZIP file (the argument order is zip.putFile(target, source)).

The ZIP file only gets written when you call the zip.generate2File(aFilePath, mapOptions) function. Besides creating "myclass.zip" we are also specifying that we want the maximum compression possible (level 9).

You can explore the contents of this newly created zip file using the zip.list(aZipFile) function:

> plugin("ZIP");
> var zip = new ZIP();
> zip.list("myclass.zip");
+--------------------+-----------------+-------------------+
|  src/myclass.java: | compressedSize: | 17                |
|                    |           size: | 15                |
|                    |            crc: | 3010494688        |
|                    |           name: | src/myclass.java  |
|                    |        comment: | null              |
|                    |           time: | 1569369872000     |
+--------------------+-----------------+-------------------+
| bin/myclass.class: | compressedSize: | 18                |
|                    |           size: | 16                |
|                    |            crc: | 626008697         |
|                    |           name: | bin/myclass.class |
|                    |        comment: | null              |
|                    |           time: | 1569369872000     |
+--------------------+-----------------+-------------------+

Note: for bigger ZIP files you can use the zip.streamPutFile* functions.

Saturday, September 21, 2019

Handling failure on REST calls

When calling other services through REST you always need to anticipate failure: it might be a network issue, or the service itself might be down.

The default behaviour

In OpenAF, when a REST call fails, the result will look similar to this:
var res = $rest().get("http://127.0.0.1:12345");
if (isDef(res.error)) {
    logErr("There was an error contacting the service: " + res.error);
} else {
    // Process the result
}
Showing $rest() function returning error
Nevertheless, you can set the throwExceptions flag to handle it differently:
try {
    var res = $rest({ 
        throwExceptions: true
    })
    .get("http://127.0.0.1:12345");

    // Process the result
} catch(e) {
    logErr("There was an error contacting the service: " + String(e));
}
Showing $rest() function with throwExceptions flag set to true

A simpler, more elegant way

But your code starts filling up with exception handling when all you wanted was some non-critical information for which a default reply is okay. Let's say you have a service that returns an array of favourite fruits for a given user.
The expected behaviour when everything is working would be:
Showing $rest() function calling a service returning a user and a fruits array
So your code could look like this:
addFavouriteFruitsToDashboard(
    $rest()
    .post("http://127.0.0.1:12345/getFruits", { user: currentUser })
);
The only problem is if the service fails. Then you either have to check the result or wrap the addFavouriteFruitsToDashboard call in try/catch. But the $rest() shortcut can handle that for you with the default option. This option lets you define a default map to return in case something goes wrong. You still get the error entry, but you can choose whether to handle it or not.
The previous code now can look more like this:
addFavouriteFruitsToDashboard(
    $rest({
        default: { 
            user  : currentUser, 
            fruits: []
        }
    })
    .post("http://127.0.0.1:12345/getFruits", { user: currentUser })
);
So in case of error, you will always have, at least, an empty fruits array. Calling the $rest() function now with an error on the server (like turning it off) results in:
Showing $rest() function calling a service returning a user, an empty fruits array and an error
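The pattern behind the default option is generic and easy to reproduce elsewhere. Here is a plain JavaScript sketch (not the actual $rest implementation; withDefault is a hypothetical helper name):

```javascript
// Sketch: call fn() and, if it throws, return the provided default map
// with the error recorded under an "error" entry, mimicking the
// behaviour of $rest()'s "default" option.
function withDefault(fn, def) {
  try {
    return fn();
  } catch (e) {
    var out = {};
    for (var k in def) out[k] = def[k];
    out.error = String(e);
    return out;
  }
}

withDefault(function() { throw "service down"; },
            { user: "scott", fruits: [] });
// → { user: "scott", fruits: [], error: "service down" }
```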

By the way...

By the way, if you want to test this yourself and need a quick-and-dirty REST service, you just have to run the following lines:
ow.loadServer();

var hs = ow.server.httpd.start(12345);
ow.server.httpd.route(hs, { 
    "/getFruits": r => {
        return ow.server.rest.reply("/getFruits", r, 
            (idxs, data, req) => { 
                return { 
                    user: data.user, 
                    fruits: [ 
                        'banana', 
                        'apple', 
                        'orange' 
                    ] 
                } 
            }
        )
    }
});

log("I'm ready!");
ow.server.daemon();

Friday, September 20, 2019

Testing a TCP port

When performing TCP connections you always have to deal with eventual connection failure. For example, if you write a script that connects to a specific server, you should handle the case where, at execution time, there is no connectivity to the desired target.

So how do you handle these events? The typical answer is catching the connection error exception and dealing with it. For example:

try {
    // make the TCP call
} catch(e) {
    // handle the exception
}

You should always handle the exception, but you can avoid even entering the try/catch block with a quick TCP connectivity check:

ow.loadFormat();

var host = "my.service.host";
var port = 1234;

if (ow.format.testPort(host, port)) {
    var result;
    try {
        result = callService(host, port, params);
        // process result
    } catch(e) {
        logErr("Problem calling service on " + host + ":" + port);
    }
} else {
    logErr("No connectivity to " + host + ":" + port);
}

The ow.format.testPort function allows for quick socket connection tests. By default it times out after 1.5 seconds, but you can change that using a third parameter:

// Wait 5 seconds before declaring farAway service not reachable (false)
ow.format.testPort(farAwayHost, farAwayPort, 5000);
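The guard-then-call shape above generalizes to any "check cheaply before doing the expensive thing" situation. A plain JavaScript sketch of the pattern (guardedCall and both callbacks are hypothetical stand-ins, not real APIs):

```javascript
// Sketch: only attempt the call when a cheap reachability test passes,
// and still fall back if the call itself throws.
function guardedCall(isReachable, call, fallback) {
  if (!isReachable()) return fallback;
  try {
    return call();
  } catch (e) {
    return fallback;
  }
}

guardedCall(function() { return false; },   // e.g. ow.format.testPort(host, port)
            function() { return "data"; },  // e.g. callService(host, port, params)
            "no connectivity");
// → "no connectivity"
```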

Wednesday, September 18, 2019

Quick OpenAF streams conversion

When using input or output streams in OpenAF you might want to quickly "convert them" to a string or array of bytes and vice-versa. This is usually for testing purposes, but you may find the following functions handy at any time.

Converting from String/Bytes to an Input Stream

If you need to get a string or an array of bytes into an input stream you can use af.fromBytes2InputStream or af.fromString2InputStream. Here is an example:

ioStreamReadLines(af.fromString2InputStream("Hello World!!\n"), (line) => {
    print(line);
});

Creating and converting an OutputStream to String/Bytes

If you want to check what is being output to a given output stream you can create an "in-memory" OutputStream (actually a java.io.ByteArrayOutputStream):

var ostream = af.newOutputStream();
ioStreamCopy(ostream, af.fromString2InputStream("Hello World!\n"));

// Converting to string
print(ostream.toString());  

// Converting to an array of bytes
//var b = ostream.toByteArray(); 

If you don't want to copy from another input stream and just want to set the contents of the output stream:

var ostream = af.fromString2OutputStream("Hello world!\n");

Which is pretty much equivalent to the previous example. Of course, there is also an af.fromBytes2OutputStream function.

Converting an InputStream to String/Bytes

Ok, now we have an input stream and we just want to check its contents in the form of an array of bytes:

var istream = io.readFileStream("myfile.txt");
var contents = af.fromInputStream2Bytes(istream);

Again, an af.fromInputStream2String is also available.

Introduction to streams in OpenAF

Some of the functions you find in OpenAF have the "stream version" and the "non stream version". For example: io.readFileBytes and io.readFileStream; io.writeFileBytes and io.writeFileStream; HTTP.getBytes and HTTP.getStream; etc.

So what's the difference?

Roughly, the non-stream version reads the relevant contents into memory from another source, or writes them from memory to another source. It's fast, but the bigger the contents, the more memory you will need.

The stream version provides an object that lets other functions retrieve small subsets of the content (from another source or memory) while handling that content.

For example, reading a file:

var contents = io.readFileBytes("myFile.bin");

// vs

var istream = io.readFileStream("myFile.bin");

The variable contents will hold all the bytes of the myFile.bin file, while the variable istream will be an object allowing other functions to read parts of myFile.bin.

Input and output streams

These OpenAF streams are actually nothing more than the Java's InputStream and OutputStream objects. Input for streams that let you read content from another source and Output for streams that let you write content to some other source.

In the file example you have an input stream: istream. So how can you now, for example, write the contents you get from the input stream to another file?

var istream = io.readFileStream("myFile.bin");
var ostream = io.writeFileStream("myNewFile.bin");

ioStreamCopy(ostream, istream);

Usually these stream objects need to be closed once used, by calling the .close() method. But OpenAF's ioStreamCopy does all of that for you.

There are more methods like ioStreamRead, ioStreamReadLines, ioStreamReadBytes, ioStreamWrite, ioStreamWriteBytes, etc…

Handling input streams

Let's assume that you are reading a huge csv file and you want to process each line:

var istream = io.readFileStream("mycsv.csv");
ioStreamReadLines(istream, (line) => {
    // Handle the line

    return result;
});
istream.close();

In this example the function ioStreamReadLines reads from the istream input stream until it finds the defined separator, new line ('\n') by default. It then calls the callback function with one argument: the entire line. At the end, istream is closed since it's no longer needed.

What about the returned result? One of the benefits is that you don't have to read to the end of the stream/file; you can stop at any time. If the callback returns true, ioStreamReadLines will stop reading from the input stream and return.

Note: there is an argument on the ioStreamReadLines function to provide a separator other than '\n'. Check "help ioStreamReadLines" on an openaf-console.
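The early-stop contract can be illustrated in plain JavaScript over an in-memory string. This is a sketch of the semantics only, not ioStreamReadLines itself (which reads incrementally from a real stream):

```javascript
// Sketch: feed lines to a callback until it returns true (stop signal),
// mimicking ioStreamReadLines' early-stop behaviour.
function readLines(text, callback, separator) {
  separator = separator || "\n";
  var lines = text.split(separator);
  for (var i = 0; i < lines.length; i++) {
    if (callback(lines[i]) === true) return;
  }
}

var seen = [];
readLines("a\nb\nc", function(line) {
  seen.push(line);
  return line === "b";   // stop after reading "b"
});
// seen → ["a", "b"]
```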

Sunday, September 15, 2019

Function profiling

When coding/scripting there are two important things to do depending on the use the code is going to have: debugging and profiling.

In OpenAF there is an included mini test library that is actually also used to run OpenAF's own automated build tests: ow.test.

In this article we are going to show how to use the ow.test to perform quick code profiling.

Example

Let's take a simple exercise. Imagine you have a 10K-entry array and you don't know which to use: $from(array).select() or array.map().

Let's create the array and load ow.test:

ow.loadTest();
var ar = [];
for(let i = 0; i < 10000; i++) {
    ar.push(i);
}

Let's create now the sample function for $from:

function fromTest() {
    $from(ar)
    .select((r) => {
        return r + 1;
    });
}

And map:

function mapTest() {
    ar
    .map((r) => {
        return r + 1;
    });
}

We have the array and the test functions; now let's use ow.test to understand how each function behaves over 400 executions:

for(let i = 0; i < 400; i++) {
    ow.test.test("$from", fromTest);
    ow.test.test("map", mapTest);
}

Now for the results:

print("Results:\n" + printMap(ow.test.getProfile()));
print("Averages:\n" + printMap(ow.test.getAllProfileAvg()));

So, clearly, in this case, map beat $from: the average execution time is better, and even the minimum time for $from is worse than the maximum for map.

Of course every case is different, and that's why there is never a "silver bullet" solution. You just have to test it.
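Outside OpenAF, the same kind of quick comparison can be sketched with a tiny timing harness. This is illustrative only (ow.test keeps richer per-test statistics), and the profile name is hypothetical:

```javascript
// Sketch: run fn n times and return total/avg/min/max in milliseconds.
function profile(fn, n) {
  var min = Infinity, max = 0, total = 0;
  for (var i = 0; i < n; i++) {
    var t0 = Date.now();
    fn();
    var dt = Date.now() - t0;
    total += dt;
    if (dt < min) min = dt;
    if (dt > max) max = dt;
  }
  return { total: total, avg: total / n, min: min, max: max };
}

var stats = profile(function() { [1, 2, 3].map(function(x) { return x + 1; }); }, 100);
// stats.avg, stats.min and stats.max now summarize the 100 runs
```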

Thursday, September 12, 2019

gzip/gunzip functionality in OpenAF

There is built-in functionality in OpenAF to apply gzip or gunzip to files or arrays of bytes. It is available on the io object through the functions io.gzip, io.gunzip, io.readFileGzipStream and io.writeFileGzipStream. They divide into two ways of use:

To/From an array of bytes

The simplest way is to gzip/gunzip to/from an array of bytes:

var theLog = io.gunzip(io.readFileBytes("abc.log.gz"));
io.writeFileBytes("new.log.gz", io.gzip(theLog));

The only issue is that the arrays of bytes (e.g. theLog and the result of io.gzip) are kept entirely in memory.

To/From streams

To address larger amounts of data without the memory cost, especially for large files, there are the stream-based functions:

var rstream = io.readFileGzipStream("abc.log.gz");
// use rstream and change its content
// when ready to write just create a write stream
var wstream = io.writeFileGzipStream("new.log.gz");
// and use it to write to the new gzip file

Compress/Uncompress

To help store big javascript objects (even in memory) OpenAF provides two functions: compress and uncompress.

Of course the gains will be greater the bigger, and more compressible, the object is. Let's see some examples:

> var out = compress(io.listFiles("."));
> out.length
1959
> stringify(io.listFiles("."), void 0, 22).length
11674
> stringify(uncompress(out), void 0, "").length
11674

Of course objects are not stored in memory as their stringified version but, you get the idea. It's specific to cases where you need to keep an object in memory that you won't be accessing in the medium/long term of your OpenAF script's execution. Of course, it's also easy to save/load from a binary file:

> io.writeFileBytes("myObj.gz", compress(io.listFiles(".")));
> var theLog = uncompress(io.readFileBytes("myObj.gz"));

How to upload/download files from a Windows/SMB share folder

The OpenAF SMB plugin (or the SaMBa plugin, or the Server Message Block plugin) can be added by installing the "plugin-smb" oPack. It allows scripts to upload, download, remove and list files on a remote Windows/Samba share (depending on the user's permissions, of course).

(note: it supports SMBv3)

How to install it

Just execute:

opack install plugin-smb

How to use it

After installing you need to include the SMB plugin on your code:

plugin("SMB");

Now you can create a javascript object instance to access an SMB URL on a given domain with the corresponding user and password:

var smb = new SMB("smb://my.server/myShare", "mydomain", "myuser", "mypassword");

Listing files on a share folder

To list the files on a specific folder you can use the .listFiles function:

> smb.listFiles("smb://my.server/myShare/aFolder");
+--------+-----+---------------+--------------------------------------------------------+
| files: | [0] |     filename: | a/                                                     |
|        |     |     filepath: | smb://my.server/myShare/aFolder/a/                     |
|        |     |         size: | 0                                                      |
|        |     |  permissions: | <all permissions>                                      |
|        |     | lastModified: | 1552671846146                                          |
|        |     |   createTime: | 1302948583597                                          |
|        |     |  isDirectory: | true                                                   |
|        |     |       isFile: | false                                                  |
+--------+-----+---------------+--------------------------------------------------------+
|        | [1] |     filename: | Thumbs.db                                              |
|        |     |     filepath: | smb://my.server/myShare/aFolder/Thumbs.db              |
|        |     |         size: | 99328                                                  |
|        |     |  permissions: | <all permissions>                                      |
|        |     | lastModified: | 1555318143997                                          |
|        |     |   createTime: | 1379322293251                                          |
|        |     |  isDirectory: | false                                                  |
|        |     |       isFile: | true                                                   |
+--------+-----+---------------+--------------------------------------------------------+

Notice that the filepath entry is a prebuilt URL that you can use, for example, to list a sub-folder.

Downloading a file from a share folder

To download a file you just need to provide the SMB URL:

> smb.getFile("smb://my.server/myShare/aFolder/Thumbs.db", "theOtherThumbs.db");
99328

It will return the number of bytes transferred.

You can also use .getFileBytes to handle the downloaded content as an in-memory array of bytes instead of saving to a file, and .getInputStream to receive a stream to handle the download. Check the corresponding help information on the openaf-console.

Uploading a file to a share folder

Uploading a file is similar to downloading, reversing the arguments of the .putFile function:

> smb.putFile("readme.txt", "smb://my.server/myShare/aFolder/readme.txt");
1235

It will also return the number of bytes transferred.

As with the .getFile function, you also have .writeFileBytes to upload content directly from an in-memory array of bytes instead of a local file, and .writeFileStream to upload directly from a stream. Check the corresponding help information on the openaf-console. There is also an extra argument to append content to an existing file on the remote share folder.

Removing a file from a remote share folder

To delete a file from a remote share folder just use the .rm function:

smb.rm("smb://my.server/myShare/aFolder/readme.txt")

Tuesday, September 10, 2019

Getting a DB table's columns/fields

Ever wonder how the OpenAF-console can run commands like "dsql", listing all the columns of a table (or any database object) so quickly? It's all in the JDBC metadata available for any query over all columns; there is no need to go to the database-specific column catalog.

Getting the metadata

For each of the tables identified in the catalog of the corresponding database:

  1. Perform a simple query to access the table's metadata, BUT getting the JDBC ResultSet object:
var db = new DB(jdbcURL, jdbcUsername, jdbcPassword);
var rs = db.qsRS("select * from \"" + schemaName + "\".\"" + tableName + "\"");

Note: If you want to use a specific JDBC driver class just use "new DB(jdbcDriver, jdbcURL, jdbcUsername, jdbcPassword)"; otherwise it will try to guess from the jdbcURL for known drivers.

  2. Get the number of columns from the JDBC ResultSet object:
var numberOfColumns = rs.getMetaData().getColumnCount();
  3. Get the columns' metadata:
var columns = []; 
for(let ci = 1; ci <= numberOfColumns; ci++) { 
    columns.push({ 
        name : rs.getMetaData().getColumnName(ci), 
        type:  rs.getMetaData().getColumnTypeName(ci).toUpperCase(), 
        size : rs.getMetaData().getColumnDisplaySize(ci),
        scale: rs.getMetaData().getScale(ci) 
    })
};
  4. (please) Close the result set object and any open transaction (e.g. because of PostgreSQL):
rs.close();
db.rollback();

And there you have it: a columns array where each map entry has the necessary column information for the specific table:

> table columns
     name      |  type   |size|scale
---------------+---------+----+-----
ID             |NUMERIC  |11  |0
SOME           |VARCHAR  |35  |0
TKEY           |VARCHAR  |512 |0
VALUE          |NUMERIC  |25  |6
CREATED_BY     |VARCHAR  |64  |0
CREATED_DATE   |TIMESTAMP|22  |0
MODIFIED_BY    |VARCHAR  |64  |0
MODIFIED_DATE  |TIMESTAMP|22  |0
[#8 rows]

What about the list of tables?

Ok, for the list of tables you might really need to go to the database's catalog. Here are some examples:

Oracle

To get all tables in a specific Oracle schema:

var tables = mapArray(db.qs("select owner || '.' || table_name from all_tables where owner = ?", ['mySchema'], true).results, [ "table_name" ]);

PostgreSQL

To get all tables in a specific PostgreSQL schema:

var tables = mapArray(db.qs("select table_schema || '.' || table_name from information_schema.tables where table_schema = ?", ['mySchema'], true).results, [ "table_name" ]);

H2

To get all tables in a specific H2 schema:

var tables = mapArray(db.qs("select table_schema || '.' || table_name from information_schema.tables where table_schema = ?", ['mySchema'], true).results, [ "table_name" ]);

Note: Yes, it's identical to the PostgreSQL query.

Using ElasticSearch with nAttrMon

nAttrMon captures monitoring inputs to be validated and output wherever deemed necessary. But once you have more than one instance, or are monitoring a lot of inputs, you start needing to build specific dashboards, visualizations, etc.
Additionally, although nAttrMon lets you store input history, that is limited since it's not intended to be kept for longer than a couple of days.
For all these challenges, outputting to ElasticSearch and using Kibana provides the answer:
  • ElasticSearch lets you store more than just a couple of days of nAttrMon's input
  • Kibana lets you visualize the data any way you want from ElasticSearch.

How to set it up

So how to get nAttrMon to output to ElasticSearch? Simply create a new output (example in YAML):
output:
  name         : Output ES values
  chSubscribe  : nattrmon::cvals
  waitForFinish: true
  onlyOnEvent  : true
  execFrom     : nOutput_ES
  execArgs     : 
    url      : http://127.0.0.1:9200
    #user     : nouser
    #pass     : nopass
    #considerSetAll: false 
    #funcIndex: "ow.ch.utils.getElasticIndex('nattrmon-attrs',\"yyyy.ww\")" 
    ## note: Check getElasticIndex function, If a specific format is needed
    ## you can provide it as aFormat (see ow.format.fromDate)"
    funcIndex: "ow.ch.utils.getElasticIndex('nattrmon-attrs', 'yyyy.ww')"
    #index: nattrs
    stampMap : 
      environment: EnvA
      region     : EU
    #include  :
    #  - test/test 1
    # exclude  :
In most cases you only need to specify the ElasticSearch URL, but there are more parameters:

Exec argument  | Description
---------------+------------------------------------------------------------
index          | The ElasticSearch index where the current values (channel nattrmon::cvals) are stored.
funcIndex      | In most cases you want a different ElasticSearch index at different times; you can specify a function for that.
considerSetAll | nOutput_ES is based on channel subscription (chSubscribe), so it can potentially receive a setAll operation. If you use nAttrMon channel buffers you need to set this argument to true; otherwise you are probably fine with false.
stampMap       | Since several nAttrMon instances might dump attribute values to the same ElasticSearch index, you can "stamp" each output entry with specific entries (e.g. environment and region in the example).
include        | Only the listed attributes' values will be sent to ElasticSearch.
exclude        | The listed attributes' values will not be sent to ElasticSearch.
user           | The user credential, if needed.
pass           | The password credential, if needed.

Since in most ElasticSearch configurations it's advisable to have different indexes for different time periods, you can provide nAttrMon with a function in funcIndex, using the helper function ow.ch.utils.getElasticIndex:
ow.ch.utils.getElasticIndex('nattrmon-attrs', 'yyyy.ww')
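Conceptually, this helper appends a date-based suffix to a prefix so each period gets its own index. A plain JavaScript sketch of the idea (simplified to a year.month suffix; the getIndex name is hypothetical, and the real ow.ch.utils.getElasticIndex supports arbitrary date formats such as "yyyy.ww"):

```javascript
// Sketch: build a time-based index name like "nattrmon-attrs-2019.09".
function getIndex(prefix, date) {
  var y = date.getFullYear();
  var m = ("0" + (date.getMonth() + 1)).slice(-2);
  return prefix + "-" + y + "." + m;
}

getIndex("nattrmon-attrs", new Date(2019, 8, 25));
// → "nattrmon-attrs-2019.09"
```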

How does the output look in ElasticSearch?

For each attribute value change a new entry will be created in ElasticSearch:


The key is id and it's a hash. Then you will have the name of the attribute and the date it was checked. The value will be in a key with the same name as the attribute. If the attribute has categories, the '/' will be converted to a '_'. In the example above: Random_Number and Random_Dice. The stamp keys will also, of course, be part of the ElasticSearch entry.

And the warnings?

To add warnings just create a new, similar output: change its name and the ElasticSearch index where the warnings will be kept, and replace nattrmon::cvals with nattrmon::warnings in chSubscribe.
Example of an OpenAF channel accessing the ElasticSearch's nattrmon-warns index


Warnings, instead of the name of the attribute, have the title of the warning, the level (e.g. High, Medium, Low, Info, Closed), the description, the date of the last update and the date of creation. The stamp map will also be part of the ElasticSearch entry, as expected.
Example of a Kibana dashboard using sample warnings from nAttrMon

Sunday, September 8, 2019

Quickly build a REST service in OpenAF

Whenever you need a REST service in a rush, you can have a fully functional REST service with OpenAF in a couple of minutes.

1. The function(s)

Let's start with a sample function in OpenAF that you need to make available as a REST service:

function addNumbers(inputMap) {
    inputMap   = _$(inputMap).isMap().default({});
    inputMap.a = _$(inputMap.a).default(0);
    inputMap.b = _$(inputMap.b).default(0);

    return {
        a: Number(inputMap.a),
        b: Number(inputMap.b),
        res: Number(inputMap.a) + Number(inputMap.b)
    }
}

Advice: it's easier if it receives a map and returns a map.

Now save the addNumbers function in mylib.js.
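For readers unfamiliar with OpenAF's `_$` validator, the same defaulting logic looks roughly like this in plain JavaScript (a sketch; addNumbersPlain is a hypothetical name):

```javascript
// Sketch: the same defaulting as addNumbers, without the _$ validator.
function addNumbersPlain(inputMap) {
  inputMap = (inputMap && typeof inputMap === "object") ? inputMap : {};
  var a = (inputMap.a === undefined) ? 0 : inputMap.a;
  var b = (inputMap.b === undefined) ? 0 : inputMap.b;

  return { a: Number(a), b: Number(b), res: Number(a) + Number(b) };
}

addNumbersPlain({ a: "5", b: 5 });  // → { a: 5, b: 5, res: 10 }
```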

2. Install some helper oPacks

$ opack install ojob-common
$ opack install openaf-templates

3. Setup the main oJob

Copy openaf-templates/ojobs/restServices/restServices.yaml to your current folder (together with mylib.js from step 1) under the name main.yaml.

$ cp openaf-templates/ojobs/restServices/restServices.yaml main.yaml

Now let's edit the main.yaml:

  1. Change the piddir line to "piddir: &PIDDIR myService.pid"
  2. On the "Prepare my service" job change to something like this:
  - name: Prepare my service
    to  : REST Service
    args: 
      uri       : /add     # That's your new URI
      port      : *PORT

      # Your code for the GET verb
      execGET   : |
        loadLib("mylib.js");
        return addNumbers(request.params);

      # Your code for the POST verb
      execPOST  : |
        loadLib("mylib.js");
        return addNumbers(data);
      execPUT   : "return { result: 0 }"
      execDELETE: "return { result: 0 }"

You can quickly test it by executing:

$ ojob main.yaml

Now, on your favourite REST client, execute something similar to:

$ curl "http://127.0.0.1:8090/add?a=5&b=5"
{"a":5,"b":5,"res":10}
$ curl -XPOST "http://127.0.0.1:8090/add" -d "{'a':1,'b':3}" -H "Content-Type: application/json"
{"a":1,"b":3,"res":4}

It's working!

Let's docker it

Create a Dockerfile:

FROM openaf/openaf-ojobc

COPY mylib.js /openaf/mylib.js
COPY main.yaml /openaf/main.yaml

Build it:

$ docker build . -t myservice

Start it:

$ docker run --rm -ti -p 8090:8090 myservice

Test it:

$ curl "http://127.0.0.1:8090/add?a=5&b=5"
{"a":5,"b":5,"res":10}
$ curl -XPOST "http://127.0.0.1:8090/add" -d "{'a':1,'b':3}" -H "Content-Type: application/json"
{"a":1,"b":3,"res":4}

It's that easy.

Saturday, September 7, 2019

Quick XML to/from JSON conversion

At first glance XML and JSON have some similarities, but XML is actually document-oriented while JSON is data-oriented. Nevertheless, in some cases, especially when processing in Javascript, it's useful to convert XML into JSON and JSON into XML. Let's check some basic examples.

Simple example XML to JSON

Consider the following example:

<orders>
    <orderId type="normal">1234</orderId>
    <items>
        <item>
            <id>1234</id>
            <qty>1</qty>
        </item>
        <item>
            <id>5678</id>
            <qty>2</qty>
        </item>
    </items>
</orders>

To simply convert it to JSON in OpenAF just execute:

> af.fromXML2Obj("<orders>\n\t<orderId type=\"normal\">1234</orderId>\n\t<items>\n\t\t<item>\n\t\t\t<id>1234</id>\n\t\t\t<qty>1</qty>\n\t\t</item>\n\t\t<item>\n\t\t\t<id>5678</id>\n\t\t\t<qty
>2</qty>\n\t\t</item>\n\t</items>\n</orders>")
{
  "orders": {
    "orderId": "1234",
    "items": {
      "item": [
        {
          "id": "1234",
          "qty": "1"
        },
        {
          "id": "5678",
          "qty": "2"
        }
      ]
    }
  }
}

The result is pretty similar to the original XML with just one small detail: the orderId type attribute is missing from the final JSON object, because JSON doesn't have "an array of attributes per key".

But there is a workaround:

> af.fromXML2Obj("<orders>\n\t<orderId type='normal'>1234</orderId>\n\t<items>\n\t\t<item>\n\t\t\t<id>1234</id>\n\t\t\t<qty>1</qty>\n\t\t</item>\n\t\t<item>\n\t\t\t<id>5678</id>\n\t\t\t<qty>2</qty>\n\t\t</item>\n\t</items>\n</orders>", ["orderId"])
{
  "orders": {
    "orderId": {
      "_type": "normal",
      "_": "1234"
    },
    "items": {
      "item": [
        {
          "id": "1234",
          "qty": "1"
        },
        {
          "id": "5678",
          "qty": "2"
        }
      ]
    }
  }
}

The last optional argument of af.fromXML2Obj is an array of tag names: whenever one of those tags is found in the XML, OpenAF will convert it into a map whose keys are prefixed with "_".

In this case, "_" holds the XML tag's value (e.g. 1234) and "_type" holds the tag's type attribute with its corresponding value (e.g. "normal").
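
Since a tag can come back either as a plain string (when no attributes are kept) or as a map with the text under "_", a tiny helper (illustrative, not part of OpenAF) can read the value uniformly:

```javascript
// Read a tag's text value regardless of whether attributes were kept:
// af.fromXML2Obj returns either a plain string or a map with "_" holding
// the text and "_<attribute>" holding each attribute.
function tagValue(node) {
  return (node !== null && typeof node === "object") ? node._ : node;
}

console.log(tagValue("1234"));                         // 1234
console.log(tagValue({ _type: "normal", _: "1234" })); // 1234
```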

Simple example JSON to XML

The reverse of the previous example is also possible:

> var obj = af.fromXML2Obj("<orders>\n\t<orderId type='normal'>1234</orderId>\n\t<items>\n\t\t<item>\n\t\t\t<id>1234</id>\n\t\t\t<qty>1</qty>\n\t\t</item>\n\t\t<item>\n\t\t\t<id>5678</id>\n\t\t\t<qty>2</qty>\n\t\t</item>\n\t</items>\n</orders>");
> af.fromObj2XML(obj);
<orders><orderId>1234</orderId><items><item><id>1234</id><qty>1</qty></item><item><id>5678</id><qty>2</qty></item></items></orders>

Note: Keep in mind that af.fromXML2Obj and af.fromObj2XML are just "simplifiers" to handle XML/JSON conversion. For full support of XML you should use OpenAF's XML plugin.

RSS example

One practical use for these functions is the ability to easily convert RSS feeds into JSON and JSON into an RSS feed:

> mapArray(af.fromXML2Obj($rest().get("http://feeds.reuters.com/reuters/technologyNews")).rss.channel.item, ["title", "pubDate"])
[
  {
    "title": "Apple says Uighurs targeted in iPhone attack but disputes Google findings",
    "pubDate": "Fri, 06 Sep 2019 20:14:45 -0400"
  },
  {
    "title": "U.S. states launch antitrust probes of tech companies, focus on Facebook, Google",
    "pubDate": "Fri, 06 Sep 2019 18:05:56 -0400"
  },
  {
    "title": "Alphabet says received civil investigative demand from U.S. DoJ",
    "pubDate": "Fri, 06 Sep 2019 17:28:58 -0400"
  },
  {
[...]
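
The mapArray function used above keeps only the given keys from each map in the array; a simplified plain-JavaScript sketch of that behaviour (check mapArray's help for its full feature set):

```javascript
// Sketch of mapArray's behaviour: for each map in the array keep only the
// listed keys.
function mapArraySketch(anArray, keys) {
  return anArray.map(function(entry) {
    var out = {};
    keys.forEach(function(k) { out[k] = entry[k]; });
    return out;
  });
}

var items = [
  { title: "Some headline", pubDate: "Fri, 06 Sep 2019 20:14:45 -0400", link: "https://..." }
];
console.log(JSON.stringify(mapArraySketch(items, ["title", "pubDate"])));
// [{"title":"Some headline","pubDate":"Fri, 06 Sep 2019 20:14:45 -0400"}]
```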

Friday, September 6, 2019

Using ElasticSearch in OpenAF

OpenAF comes with built-in support for ElasticSearch. So, among other things, it can log directly to ElasticSearch whenever you use the log* functions. Nevertheless, there is an oPack that makes it all a little easier. Let's start by installing it:

$ opack install elasticsearch

The ElasticSearch oPack is basically a wrapper around some ElasticSearch functionality, aiming to make daily ElasticSearch operations easier (e.g. creating/deleting indexes, exporting/importing data, reindexing indexes, etc.). We are going to describe some basic functionality. To start, you need to instantiate an object pointing to your ElasticSearch cluster:

load("elasticsearch.js");
var es = new ElasticSearch("http://my.elastic.cluster:9200", "myUser", "myPassword");

Note: the user and password are optional and only needed if your ElasticSearch cluster is protected with user/password.

Creating an index

To create an index you just need to:

> es.createIndex("test", 1, 1);

This will create a new index "test" with 1 primary shard and 1 replica. This operation isn't usually necessary as ElasticSearch will just create any index you try to use.

You can check that it was created by executing:

> es.getIndices()

And checking the resulting array.

Adding/Changing data

To interact with ElasticSearch at the data level the easiest way in OpenAF is to use an OpenAF channel. To easily create an OpenAF channel to connect to the "test" index just execute:

es.createCh("test", ["canonicalPath"], "testCh");

This creates an OpenAF channel "testCh" that allows you to access the test index. You also need to provide the list of keys that can be used to retrieve a unique record (in this case, "canonicalPath").

Note: You can also define a pattern of indexes instead of the exact name, but then you will be limited to the .get* functions. It's also possible to provide, instead of a string, a function that returns the name of the ElasticSearch index to use (for example, appending the current date).
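
For instance, a function producing a date-suffixed index name could look like this (the "logs" prefix is just illustrative; the date is passed in to keep the function easy to test, while es.createCh would receive a zero-argument wrapper around it):

```javascript
// Illustrative generator of a per-day index name (e.g. "logs-2019.09.06").
function dailyIndex(prefix, d) {
  var pad = function(n) { return (n < 10 ? "0" : "") + n; };
  return prefix + "-" + d.getFullYear() + "." + pad(d.getMonth() + 1) + "." + pad(d.getDate());
}

console.log(dailyIndex("logs", new Date(2019, 8, 6)));  // logs-2019.09.06
```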

To add new data just use the newly created channel:

$ch("testCh").set({
    canonicalPath: "/"
}, {
    isDirectory: true,
    isFile: false,
    filename: "noname",
    filepath: "/",
    canonicalPath: "/",
    lastModified: 0,
    createTime: 0,
    size: 0
});

Retrieving data

If you already know how to use OpenAF channels this becomes easy. To get a map value you just need:

var fileMap = $ch("testCh").get({ canonicalPath: "/" });

Batch get/set data

To batch insert/change data into ElasticSearch you need to divide all the requests into smaller chunks of data. Then you just use the .setAll function:

$ch("testCh").setAll(["canonicalPath"], io.listFiles("/some/path").files);
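
Splitting a large array into chunks before calling .setAll can be sketched as follows (the chunk size is arbitrary; each resulting chunk would then be passed to the channel's setAll):

```javascript
// Split an array into chunks of at most "size" elements so each .setAll
// call stays within a reasonable bulk request size.
function chunk(anArray, size) {
  var out = [];
  for (var i = 0; i < anArray.length; i += size) out.push(anArray.slice(i, i + size));
  return out;
}

console.log(JSON.stringify(chunk([1, 2, 3, 4, 5], 2)));  // [[1,2],[3,4],[5]]
```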

To obtain a list of keys or values you can use getAll or getKeys:

var q = (m, s) => { return { size: s, query: { query_string: { query: m }}}; };
var listOfSmallFiles = $ch("testCh").getAll(q("size:<1", 1000));

Unlike the usual getAll and getKeys behaviour, the elasticsearch OpenAF channel type only retrieves a specific number of records (by default, 10). In this example we created a small function that lets you query using the Lucene query syntax and specify the maximum number of records you wish to retrieve (within the search API limits). Check the ElasticSearch search API for more.

Delete data

To delete data simply use the unset/unsetAll functions:

$ch("testCh").unset({ canonicalPath: "/" });

Sending emails

The Email plugin included with OpenAF makes it easy to send emails from any OpenAF script. Let's check it with a simple example:

plugin("Email");
var email = new Email("smtp.gmail.com", "my.email@gmail.com", true, true, false);
email.login("my.email@gmail.com", "myAppPassword");
email.send("Something is wrong", "Something was detected to be very wrong.", [ "someone@somewhere.com" ], [], [], "my.email@gmail.com");

Step by step

Let's translate each line. After including the Email plugin we created a new Email instance for the SMTP server "smtp.gmail.com" and the email account "my.email@gmail.com", turned SSL and TLS on and specified that the email wasn't going to contain any HTML.

// Email(aSMTPServer, theFromEmailAddress, useSSL, useTLS, containsHTML)
var email = new Email("smtp.gmail.com", "my.email@gmail.com", true, true, false);

The Email plugin will try to "guess" the right ports to access the SMTP server but if you need to force it you can do it:

email.setPort(12345);

The next step was authenticating with the SMTP server:

email.login(aLogin, aPassword);

And then finally we sent the simple email:

// email.send(aSubjectString, aMessageString, anArrayOfTOs, anArrayOfCCs, anArrayOfBCCs, aFromEmailAddress)
email.send("Something is wrong", "Something was detected to be very wrong.", [ "someone@somewhere.com" ], [], [], "my.email@gmail.com");

How to send an HTML email

To send an HTML email you first need to specify it when creating the Email instance:

var email = new Email("smtp.gmail.com", "my.email@gmail.com", true, true, true);

Afterwards you add the HTML using the .setHTML function:

email.setHTML("<h1>BIG NEWS</h1>Everything is <b>okay</b>.");

So, if we just defined the email contents, why should we still define aMessageString on the email.send function? In case the email client doesn't support HTML emails, the aMessageString parameter of the email.send function will be used instead.

Adding attachments

To add an attachment use the function email.addAttachment:

email.addAttachment("/my/folder/with/attach1.pdf");

Adding images for the HTML content

If you use HTML content you will probably also want to include images. These aren't regular attachments and there are actually three options available:

Embed image files

email.setHTML("<html>...<img src=\"cid:myimage.png\"/>...</html>");
email.embedFile("/some/path/myimage.png", "myimage.png");

Embed image URLs

email.setHTML("<html>...<img src=\"cid:myimage\"/>...</html>");
email.embedURL("https://some.server/some/image.jpg", "myimage");

Automatically embedding images referenced by URL

email.setHTML("<html>...<img src=\"https://some.server/some/image.jpg\"/>...</html>");
email.addExternalImage("https://some.server/some/image.jpg");

How to debug

After creating the new Email instance just add:

email.getEmailObj().setDebug(true);

Once the email sending operation starts all communication with the SMTP server will be output to stdout/stderr.

Wednesday, September 4, 2019

Quickly validate a YAML file

One of the cons of using YAML (as with any other indentation-based language) is that a forgotten tab or a wrong spacing leads to errors. For example:

jobs:
  #-------------------
  - name: Hello World!
  exec: print('Hello World!')

todo:
  - Hello World!

The problem with this YAML file is on the 4th line: the 3rd line started a map entry inside the jobs array, but the 4th line isn't indented to match it. One way to quickly check this is using another "one-liner":

$ openaf -i script -e "io.readFileYAML('aYAMLFile.yaml')"

In this case the result would be:

Error while executing operation: YAMLException: bad indentation of a mapping entry at line 4, column 3:
      exec: print('Hello World!')
      ^ (js-yaml_js#1)

Solving the issue:

jobs:
  #-------------------
  - name: Hello World!
    exec: print('Hello World!')

todo:
  - Hello World!

Executing the same one-liner now results in no errors:

$ openaf -i script -e "io.readFileYAML('aYAMLFile.yaml')"
$

Tuesday, September 3, 2019

Using the SNMP plugin

In OpenAF there are two main SNMP plugins: SNMP and SNMP server. In this article we are going to focus on the SNMP plugin, which provides SNMP client functionality.

SNMP has different versions (e.g. 1, 2, 3) that require different settings. Starting with versions 1 and 2, all you need are the SNMP connection details and, optionally, a community string:

plugin("SNMP");
var snmp = new SNMP("udp:demo.snmplabs.com/161", "public");  // version 1/2

Checking an OID value

To get the value associated with an OID:

snmp.get("1.3.6.1.2.1.1.3.0");
// {
//  "1.3.6.1.2.1.1.3.0": "36 days, 12:34:56.51"
//}
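
SNMP timeticks are counted in hundredths of a second; producing the human-readable form shown above can be sketched as (an illustrative formatter, not the plugin's own code):

```javascript
// Convert SNMP timeticks (1 tick = 1/100 s) into the "D days, HH:MM:SS.hh"
// display seen in the output above.
function formatTimeticks(ticks) {
  var pad = function(n) { return (n < 10 ? "0" : "") + n; };
  var hundredths = ticks % 100;
  var total = Math.floor(ticks / 100);          // whole seconds
  var days  = Math.floor(total / 86400);
  var hours = Math.floor((total % 86400) / 3600);
  var mins  = Math.floor((total % 3600) / 60);
  var secs  = total % 60;
  return days + " days, " + pad(hours) + ":" + pad(mins) + ":" + pad(secs) + "." + pad(hundredths);
}

console.log(formatTimeticks(315569651));  // 36 days, 12:34:56.51
```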

Sending a trap/inform

To send a trap simply:

snmp.trap("1.3.6.1.4.1.20408.4.1.1.2", [
    { OID: "1.2.3.4.5.6.7.8", type: "s", value: "My error message." }
])

You just need to provide the trap OID and an array of OID-based values. Each value can have a different type. The supported types are:

Type | Description
-----|----------------
i    | Integer
u    | Unsigned
c    | Counter32
s    | String
x    | Hex string
d    | Decimal string
n    | A null object
o    | An object ID
t    | Timeticks
a    | An IP address

Sending an inform is exactly the same, but it will return a Java response object:

var response = snmp.inform("1.3.6.1.4.1.20408.4.1.1.2", [
    { OID: "1.2.3.4.5.6.7.8", type: "s", value: "My error message." }
])

In contrast, sending a trap returns immediately and there won't be any acknowledgement.

Version 3

On version 3 you need to provide a little more information:

plugin("SNMP");
var aTimeout = 3000, aNumberOfRetries = 3;
var snmp = new SNMP("udp:demo.snmplabs.com/161", "public", aTimeout, aNumberOfRetries, 3, {
    engineId      : "8000000001020304",
    authPassphrase: "authKey1",
    privPassphrase: "privKey1",
    authProtocol  : "MD5",
    privProtocol  : "DES",
    securityName  : "usr-md5-des"
})

But all the rest is the same as shown previously.

Monday, September 2, 2019

Is defined or undefined

Two of the most common functions used in OpenAF are isDef and isUnDef. The reason is that javascript variables, when created, are "undefined" and only become defined after a value is assigned to them. So, before using a javascript variable, it's common to test whether it's defined or not.

> var abc
> isDef(abc)
false
> abc = 123
123
> isDef(abc)
true

Ok, what happens if it's undefined?

> var xyz
> isUnDef(xyz)
true
> String(xyz + 123)
NaN
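
A plain-JavaScript sketch of these two checks (illustrative; not necessarily the actual OpenAF implementation):

```javascript
// Minimal versions of the two checks: a variable is "defined" once it
// holds anything other than undefined.
function isDefSketch(v)   { return typeof v !== "undefined"; }
function isUnDefSketch(v) { return typeof v === "undefined"; }

var abc;
console.log(isDefSketch(abc));    // false
abc = 123;
console.log(isDefSketch(abc));    // true
console.log(isUnDefSketch(abc));  // false
```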

Sunday, September 1, 2019

How to copy CLOBs between two databases

Especially in Oracle, it is not very easy to get a CLOB field value from a source database and insert/update it on another CLOB field on a target database.

In OpenAF the DB.q and DB.u functions are "CLOB/BLOB" aware and will try to convert them to strings to make it seamless. But there are also dedicated DB Lob functions (such as the uLobs used below) to handle them in particular.

The next example shows how to retrieve CLOB values from one database and insert them into a temporary table on a target database:

log("Connecting...");

var db1 = new DB("jdbc:oracle:thin:@//1.2.3.1:1521/SOURCE", "loginSOURCE", "passwordSOURCE");
var db2 = new DB("jdbc:oracle:thin:@//1.2.3.2:1521/TARGET", "loginTARGET", "passwordTARGET");

log("Retrieving data...")

var res = db1.q("select obj_uuid, obj_definition from objects_table");

log("#" + res.results.length + " records retrieved");

log("Copying data...");
db2.u("truncate table TEMP_TABLE"); // Assuming you have a TEMP_TABLE already created on db2

var c = 0;
for(i in res.results) {
   var line = res.results[i];
   c += db2.uLobs("insert into TEMP_TABLE (OBJ_UUID, OBJ_DEFINITION) values (:1, :2)", [ line.OBJ_UUID, line.OBJ_DEFINITION ]);
}

log("#" + c + " records copied.");

db2.commit();
db2.close();
db1.close();

log("Done");

The result will be similar to:

Thu Apr 15 2015 12:32:25 GMT-0400 (EDT) | INFO | Connecting...
Thu Apr 15 2015 12:32:25 GMT-0400 (EDT) | INFO | Retrieving data...
Thu Apr 15 2015 12:32:26 GMT-0400 (EDT) | INFO | #3497 records retrieved
Thu Apr 15 2015 12:32:26 GMT-0400 (EDT) | INFO | Copying data...
Thu Apr 15 2015 12:32:30 GMT-0400 (EDT) | INFO | #3497 records copied.
Thu Apr 15 2015 12:32:30 GMT-0400 (EDT) | INFO | Done

Using arrays with parallel

OpenAF is a mix of Javascript and Java, but "pure" javascript isn't "thread-safe" in the Java world. Nevertheless be...