Wednesday, December 28, 2016

Quickstart with Angular2

Hi all,

I've written a simple tic-tac-toe application using the Angular2 framework.
I really think Angular brings a major positive change to client-side development.


For those taking their first steps with it, I recommend the Tour of Heroes tutorial on the official Angular site:
https://angular.io/docs/ts/latest/tutorial/


In my tic-tac-toe app I used these Angular2 features:
- Two-Way binding
- Multiple Components
- Router
- Service
- Forms


Now I'm working on adding some unit tests with Karma & Jasmine.

You can find the source code here:

Download or clone it, run "npm start" and have fun.

Monday, December 19, 2016

TIBCO Products - JMS vs. Rendezvous, Queue vs. Topic, Route vs. Bridge

Hi again,

When dealing with TIBCO products you can easily get confused by the many related buzzwords and methodologies.
I've decided to write a brief comparison.



Attribute                          Rendezvous          JMS
Real-life example                  Radio broadcaster   Telephone
Sends messages to all clients      Yes                 No
Requires a server                  No                  Yes
Persistence                        No                  Yes
Reliable messaging                 No                  Yes
High-speed messaging               Yes                 No
Clustering & failover              -                   Yes

Attribute                       Topic                                  Queue
Architecture                    Publish/Subscribe model                Point-to-Point model
Number of clients               Multiple subscribers get the message   Only one consumer gets the message
Messages delivered in order     No                                     Yes
Messages processed only once    No                                     Yes
Destination is known            No                                     Yes
Consumer needs to be active     Yes                                    No
Consumer acknowledgement        No                                     Yes

Attribute                                      Bridge              Route
Routes messages to a queue                     On the same server  On a different server
Source and destination types can differ
(e.g. queue to topic)                          Yes                 No
Can be transitive (a->b->c means a->c)         No                  Yes (topics only)
Filters via selectors                          Yes                 Yes (topics only)
JMS server restart needed on config change     Yes                 No
Hops allowed                                   -                   Single hop
Config file                                    bridges.conf        routes.conf

Tuesday, August 30, 2016

Tips for a good ESB integration system


1. Build an SOA environment –

Share common logic across processes with exposed services.

Benefits:

Changes become easy – if a common piece of logic changes, you only need to redeploy one service and every process will work with the new logic.

Service logic is shared across ALL applications –
These days almost every development tool and language can consume a web service, which means you can share logic easily across an organization. Use this to minimize duplicate code and maintenance.





2. Less code and out-of-the-box solutions –

More code = more complexity & maintenance.
Use products to minimize custom coding and to get reliable solutions and support from professional vendors.
 

 

3. End-to-end tracing –

A good BPM always has tracing and information about the messages being transferred over the wire.

Suppose a message takes this route:
Application A --> Integration System --> Application B, Application C, Application D.
Make sure there's a field that identifies the message at each step, and that every application writes to the same trace log:

Application A writes that message 12203 was created in a folder.
Integration System writes that message 12203 was picked up and processing started.
Integration System writes that message 12203 was sent to Application B, Application C and Application D.
Application B writes that message 12203 was received.
etc.

Write the important business and technical process data.

Many organizations use SQL Server for tracing, but a more innovative solution is Elasticsearch, which offers free-text search over huge amounts of data.

Building reports for customers (based on those traces) is always a good thing, because they help with maintenance.
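
As a sketch of the idea, here is a minimal correlation-aware trace writer in Java (the class, method and field names are hypothetical, purely for illustration):

import java.time.Instant;

// Minimal sketch of a shared trace writer. Every application stamps the same
// correlation ID (the message ID), so one message can be followed end to end.
public class TraceLogger {

    // In production this would write to a shared store (e.g. Elasticsearch);
    // here the structured entry is simply printed.
    public static void trace(String messageId, String application, String event) {
        System.out.printf("%s | msg=%s | app=%s | %s%n",
                Instant.now(), messageId, application, event);
    }

    public static void main(String[] args) {
        trace("12203", "ApplicationA", "message created in folder");
        trace("12203", "IntegrationSystem", "message picked up, processing started");
        trace("12203", "IntegrationSystem", "message sent to B, C and D");
        trace("12203", "ApplicationB", "message received");
    }
}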



 

4. Cache layer for lookups –

Reading from and writing to memory is faster than any I/O.

If many processes use configuration or lookup values stored in SQL Server, consider an appropriate cache layer that keeps that data in memory.

Redis & Memcached are the classics for caching, but you can also use NoSQL solutions that store data in memory before persisting it (e.g. Couchbase).
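
A minimal cache-aside sketch, assuming a map-based in-process cache stands in for a real Redis/Memcached client (loadFromDatabase is a hypothetical placeholder for the SQL lookup):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: check memory first, fall back to the database on a miss.
public class LookupCache {

    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String lookup(String key) {
        // computeIfAbsent only hits the database when the key is not cached yet.
        return cache.computeIfAbsent(key, this::loadFromDatabase);
    }

    // Hypothetical placeholder for the real SQL Server lookup.
    private String loadFromDatabase(String key) {
        return "value-for-" + key;
    }
}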





5. XSLTs for mapping –

DON'T write C#, Java or any other language code to transform XMLs.

Develop XML maps ONLY with XSLTs, which can be reused in any integration tool that deals with XML transformation.
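
To illustrate the reuse point: the map itself lives in a standalone .xslt file, and the host language only applies it. A minimal sketch using the standard Java JAXP API (the file paths are hypothetical):

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// The mapping logic stays in map.xslt; the code below only executes it.
public class ApplyMap {
    public static void main(String[] args) throws Exception {
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("map.xslt"));   // hypothetical path
        transformer.transform(new StreamSource("input.xml"),
                              new StreamResult("output.xml"));
    }
}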


 

6. Use stored procedures for queries –

It's better for your application to execute stored procedures than to send queries as text to the DB.
It's easier for a DBA to control & manage the organization's logic, and to avoid "BAD" queries.
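
A minimal JDBC sketch of calling a stored procedure instead of sending query text (the connection string, credentials and the GetCustomerById procedure are hypothetical):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class StoredProcExample {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=Integration", "user", "pass");
             // The query text lives in the DB, under the DBA's control.
             CallableStatement call = con.prepareCall("{call GetCustomerById(?)}")) {
            call.setInt(1, 12203);
            try (ResultSet rs = call.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("CustomerName"));
                }
            }
        }
    }
}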


 

7. Dynamic configuration –

Think carefully about which process configurations you should be able to modify at runtime without changing any code.

Don't exaggerate with dynamic configuration, because it has a maintenance price. You do not want an interface with a huge number of settings if most of them are values that will remain unchanged.


 

8. Well-defined schemas –

It's important that schema nodes, elements and attributes have real constraints (min/max occurs, value types etc.).

It's possible to restrict schema field value types in many ways (int, string, regular expressions, enums etc.).

Benefits:

Data integrity – well, that's the obvious reason. Data is more reliable at each point of the entire process.

Less complexity in the process – suppose a process needs to get an e-mail address from a request message.

If the schema enforces a valid e-mail regex on that field, there is no need to perform the validation in the process.
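
For instance, a hedged XSD sketch of such a restriction, in the same style as the schemas further down this blog (the element name and pattern are illustrative; real-world e-mail patterns vary):

<element name="Email">
    <simpleType>
        <restriction base="string">
            <!-- Illustrative pattern; adjust to your validation rules -->
            <pattern value="[^@]+@[^@]+\.[a-zA-Z]{2,}"/>
        </restriction>
    </simpleType>
</element>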



 

9. Async web services are better than sync web services –

If there is a possibility of consuming a web service asynchronously – ALWAYS prefer it over a synchronous web service.

A synchronous (two-way) call opens a session, sends a request, waits for the response and only then closes the session.

An asynchronous (one-way) call opens a session, sends the request and closes the session immediately, without waiting for a response.

More sessions = more CPU and memory usage on the server.

Keep this tip in mind especially for offline interfaces. In those kinds of processes execution doesn't have to be immediate, so the entire interface can take longer.
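
A hedged JMS sketch of the one-way style: the sender posts the request to a queue and returns immediately, so no session is held open waiting for a reply (the JNDI names are hypothetical):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class OneWaySend {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("requests.queue");   // hypothetical name

        Connection con = factory.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            // Fire and forget: no reply is awaited, so the session closes right away.
            producer.send(session.createTextMessage("<Request>...</Request>"));
        } finally {
            con.close();
        }
    }
}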


 

10. Avoid direct connections between production & test environments –

Some organizations tend to send messages from production to test/development environments.

That's really bad, because if a resource isn't available in test/development, production can be affected.
 

 

11. Identical backup environment (or a cluster in active-passive mode) –

Maintain a backup (inactive) environment for production – an exact copy of it.

Make sure the backup environment is always up to date with the latest changes of the WORKING production environment.


 

12. Retry mechanism when using resources –

Consider using a retry mechanism in processes that use resources (SQL, web services etc.).

Note that you can implement retries only in OFFLINE processes / async services (i.e. when no one is waiting with an open session for an answer from that service).
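
A minimal sketch of a retry loop with simple linear backoff (the attempt limit and delay are illustrative):

public class Retry {

    // Retries an operation a few times with linear backoff before giving up.
    public static void withRetries(Runnable operation, int maxAttempts)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                operation.run();
                return;                        // success: stop retrying
            } catch (RuntimeException e) {
                if (attempt == maxAttempts) {
                    throw e;                   // attempts exhausted: surface the failure
                }
                Thread.sleep(1000L * attempt); // back off before the next try
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        withRetries(() -> System.out.println("calling resource..."), 3);
    }
}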


 

13. Centralized production environments –

With the major progress in communication over the last few years, geographic distance is no longer the issue it used to be.

Fewer environments = less maintenance.

A good centralized environment contains distributed servers that share the data processing.
Some integration tools, like BizTalk, include a load-balancing mechanism out of the box; if your tool doesn't, load-balancing software can help.

 

14. Health and performance monitoring system –

Monitor & alert on the crucial resources used by your processes in order to identify issues early. "SCOM" and "Nagios" are popular products; use "Watcher" for Elasticsearch logs.

For instance:
If a server's CPU has been at 100% for the last 30 seconds – send an e-mail.
If files are piling up in a directory that should be empty – send an SMS.



 

15. Archive source messages/requests –

Archiving gives the integration system the power to rerun messages when a process goes wrong, and makes it self-sufficient from end to end.



 

16. Persistence –

Some integration tools (like BizTalk, which is DB-based) have persistence built into their engine.
Products that put an emphasis on performance don't have it (e.g. TIBCO), but it's possible to implement persistence logic inside a process.
Analyze each process before implementing, and think about where persistence is necessary if everything breaks down.

 

17. Parallel processing –

Use parallel processing if the process logic allows it, but beware of over-complexity.
Keep in mind that the code must be thread-safe, as in the sketch below.
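
A hedged sketch of fanning work out to a thread pool (handleMessage is an illustrative stand-in for real processing; note the thread-safe counter):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelProcessing {

    // AtomicInteger keeps the shared counter thread-safe across workers.
    private static final AtomicInteger processed = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100; i++) {
            final int messageId = i;
            pool.submit(() -> handleMessage(messageId));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("Processed: " + processed.get());
    }

    private static void handleMessage(int messageId) {
        // Illustrative work; real message processing goes here.
        processed.incrementAndGet();
    }
}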




Tuesday, July 19, 2016

Developing Tibco BW interfaces with Elasticsearch for trace logs – PART 2

Part 1 of this post was generally about Elasticsearch.
This part explains how to send documents from TIBCO BW 5.13 to Elasticsearch.

Send docs from TIBCO to Elastic

There are two ways to send docs from BW to elastic:

1. Elastic REST API.
2. Elastic Java Client API (download it here: https://www.elastic.co/guide/en/elasticsearch/client/java-api/index.html).

This post discusses the first option.
To create JSON docs and invoke REST easily, please install the JSON & REST plugin in the TIBCO folder.

Create mapping and appropriate XSD

Let's get back to Elasticsearch for a moment.
Open Sense and create a mapping for the trace documents that will be sent from TIBCO.
Notice that almost none of the fields are analyzed, since each contains a single word/string:
 
POST monitors
{
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "monitor": {
      "properties": {
        "ProcessGroup": {
          "type": "string",
          "index": "not_analyzed"
        },
        "ProcessName": {
          "type": "string",
          "index": "not_analyzed"
        },
        "OpName": {
          "type": "string",
          "index": "not_analyzed"
        },
        "Domain": {
          "type": "string",
          "index": "not_analyzed"
        },
        "TraceType": {
          "type": "string",
          "index": "not_analyzed"
        },
        "TraceDateTime": {
          "type": "date",
          "format": "yyyy-MM-dd HH:mm:ss"
        },
        "PatientID": {
          "type": "string",
          "index": "analyzed"
        },
        "MessageDateTime": {
          "type": "string"
        },
        "ApplicationCode": {
          "type": "string",
          "index": "not_analyzed"
        },
        "SrcMessageID": {
          "type": "string",
          "index": "not_analyzed"
        },
        "ProcessID": {
          "type": "string",
          "index": "not_analyzed"
        },
        "OpID": {
          "type": "string",
          "index": "not_analyzed"
        },
        "OpParentID": {
          "type": "string",
          "index": "not_analyzed"
        },
        "HostName": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}


Go back to TIBCO BW and create a schema that matches the mapping:

<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
     xmlns:tns="http://Integration.clalit.org.il/Schemas/TibSrv_Log"
     targetNamespace="http://Integration.clalit.org.il/Schemas/TibSrv_Log"
     elementFormDefault="qualified"
     attributeFormDefault="unqualified">
    <element name="WriteTrace_Request" type="tns:WriteTraceType"/>
    <complexType name="MonitorType">
        <sequence>
            <element name="ProcessGroup" type="string"/>
            <element name="ProcessName" type="string"/>
            <element name="OpName" type="string"/>
            <element name="Domain" type="string"/>
            <element name="TraceType">
                <simpleType>
                    <restriction base="string">
                        <enumeration value="Debug"/>
                        <enumeration value="Info"/>
                        <enumeration value="Warning"/>
                        <enumeration value="Error"/>
                        <enumeration value="Critical"/>
                    </restriction>
                </simpleType>
            </element>
            <element name="TraceDateTime" type="tns:string"/>
            <element name="PatientID" type="string" minOccurs="0"/>
            <element name="MessageDateTime" type="tns:string" minOccurs="0"/>
            <element name="ApplicationCode" type="string" minOccurs="0"/>
            <element name="SrcMessageID" type="string" minOccurs="0"/>
            <element name="ProcessID" type="string" minOccurs="0"/>
            <element name="OpID" type="string" minOccurs="0"/>
            <element name="OpParentID" type="string" minOccurs="0"/>
            <element name="HostName" type="string"/>
            <any namespace="##any" processContents="skip" minOccurs="0" maxOccurs="unbounded"/>
        </sequence>
    </complexType>
    <complexType name="MessageKeyDataType">
        <sequence>
            <element name="KeyValueData" maxOccurs="unbounded">
                <complexType>
                    <sequence>
                        <element name="DataFieldType" type="string"/>
                        <element name="DataFieldValue" type="string"/>
                    </sequence>
                </complexType>
            </element>
        </sequence>
    </complexType>
    <complexType name="ExceptionType">
        <sequence>
            <element name="ProcessID" type="string"/>
            <element name="Class" type="string"/>
            <element name="ProcessStack" type="string"/>
            <element name="MsgCode" type="string"/>
            <element name="Msg" type="string"/>
            <element name="StackTrace" type="string" minOccurs="0"/>
            <element name="Data" minOccurs="0">
                <complexType>
                    <sequence>
                        <any namespace="##any" processContents="skip" minOccurs="0"/>
                    </sequence>
                </complexType>
            </element>
        </sequence>
    </complexType>
    <complexType name="WriteTraceType">
        <sequence>
            <element name="Monitor" type="tns:MonitorType" nillable="true"/>
            <element name="MessageKeyData" type="tns:MessageKeyDataType" nillable="true" minOccurs="0"/>
            <element name="Exception" type="tns:ExceptionType" nillable="true" minOccurs="0"/>
        </sequence>
    </complexType>
    <element name="Monitor" type="tns:MonitorType"/>
    <element name="MessageKeyData" type="tns:MessageKeyDataType"/>
    <element name="Exception" type="tns:ExceptionType"/>
    <simpleType name="string">
        <restriction base="string"/>
    </simpleType>
    <simpleType name="anySimpleType">
        <restriction base="string"/>
    </simpleType>
</schema>


Take notice:

After the HostName element, any element can be placed.
Elasticsearch allows inserting fields that are not included in the mapping.
For instance:
  <HostName>Machine1</HostName>
  <NewElem>Hi</NewElem>
  <Another>Again</Another>

Develop BW process 

Take a look at the whole process:

[Screenshot: the whole BW process]

The first step is to map to the XSD created above.
It's important to make sure each datetime field is converted to UTC.
Here is the mapping I've used:

[Screenshot: the BW mapping]

Second, rendering from XML to JSON is easy using the plugin.
In addition, I found the "Remove root" option very useful.

Here is an example at runtime after rendering to JSON:
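
The runtime screenshot is not reproduced here, but the rendered output would look roughly like this (values are illustrative, following the mapping above):

{
  "ProcessGroup": "Admissions",
  "ProcessName": "AdmitPatient",
  "OpName": "WriteTrace",
  "Domain": "PROD",
  "TraceType": "Info",
  "TraceDateTime": "2016-07-19 08:15:30",
  "HostName": "Machine1"
}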
  


The last step is to invoke the REST API with the POST method.

Convert the JSON string (the render output) to base64 and place it in the "binary" node's "content" element.


That's it! 

All that's left to do is call this process asynchronously (you surely do NOT want tracing to disturb your processes) and bomb Elasticsearch with docs!
Don't forget to use Kibana to create nice dashboards for maintenance.

Monday, June 20, 2016

Developing Tibco BW interfaces with Elasticsearch for trace logs – PART 1

Writing trace logs to SQL Server is old-fashioned.

Elasticsearch gives many powerful options to search and visualize data:
1. Based on the Lucene engine, it's possible to perform a free-text search on the data.
2. Visualize data easily with Kibana (which connects to the Elasticsearch DB).
3. Create alerts based on the data.
4. Many more!

In this first part we'll get to know the Elasticsearch platform a bit.

Getting started

 

Download and install:

Elasticsearch, Kibana, Sense.

Download curl for Windows:


 

Start Elasticsearch server and configure mappings


Background


Elasticsearch stores documents, which need to be sent in JSON format:

a. Each document set should have a configured mapping.
In the example below, the mapping is created with an index named "monitors" and a type named "monitor". Types can be reused across several indexes.

b. It's good to have one date field, which will be used as a "Time-Field".

c. Analyzed index (the default) – enables full-text search on the field. It's possible to define various analyzers.
Non-analyzed index – full-text search is disabled on the field; the exact term must be written when searching for the doc.

More info here: http://stackoverflow.com/questions/12836642/analyzers-in-elasticsearch

d. It's possible to write documents with fields that don't exist in the mapping.

e. It's impossible to modify an existing mapping (only to drop & create it). Dropping a mapping means deleting all its documents.


f. term vs. match – the match query applies the same standard analyzer to the search term and will therefore match what is stored in the index. The term query does not apply any analyzer, so it only looks for the exact term in the index (see the example after this list).

g. It's important to partition data by dates (using index templates). For instance: create date-based indexes (all using the same type).
Please note that after creating an index template, Elasticsearch waits for a first document to be inserted before it automatically builds the index.
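
To illustrate point f, assume an index where PatientID is an analyzed field holding "test me please". A match query for the whole phrase analyzes it into terms and finds the document, while a term query for the same string finds nothing, because no single indexed term equals the whole phrase:

POST monitors/monitor/_search
{
  "query": { "match": { "PatientID": "test me please" } }
}

POST monitors/monitor/_search
{
  "query": { "term": { "PatientID": "test me please" } }
}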


Go for it


1. Open a command line and start the Elasticsearch server (default port is 9200):

[Elasticsearch installation]\bin\elasticsearch.bat


2. Open another command line and create the mapping by running the following:

curl -XPOST "localhost:9200/monitors" -d "{ \"settings\": { \"number_of_shards\": 1 },
\"mappings\": { \"monitor\": { \"properties\": {
\"ProcessGroup\": { \"type\": \"string\", \"index\": \"not_analyzed\" },
\"ProcessName\": { \"type\": \"string\", \"index\": \"analyzed\" },
\"OpName\": { \"type\": \"string\", \"index\": \"analyzed\" },
\"Domain\": { \"type\": \"string\", \"index\": \"not_analyzed\" },
\"TraceType\": { \"type\": \"string\", \"index\": \"not_analyzed\" },
\"TraceDateTime\": { \"type\": \"date\", \"format\": \"yyyy-MM-dd HH:mm:ss\" },
\"PatientID\": { \"type\": \"string\", \"index\": \"analyzed\" },
\"MessageDateTime\": { \"type\": \"string\" },
\"ApplicationCode\": { \"type\": \"string\", \"index\": \"not_analyzed\" },
\"SrcMessageID\": { \"type\": \"string\", \"index\": \"analyzed\" },
\"ProcessID\": { \"type\": \"string\", \"index\": \"not_analyzed\" },
\"OpID\": { \"type\": \"string\", \"index\": \"not_analyzed\" },
\"OpParentID\": { \"type\": \"string\", \"index\": \"not_analyzed\" },
\"HostName\": { \"type\": \"string\", \"index\": \"not_analyzed\" } } } } }"




3. Insert this test document:

{
  "ProcessGroup": "test",
  "ProcessName": "test",
  "OpName": "test",
  "Domain": "test",
  "TraceType": "Info",
  "TraceDateTime": "2016-04-04 04:46:47",
  "PatientID": "test",
  "MessageDateTime": "2016-04-04 04:46:47",
  "ApplicationCode": "test",
  "SrcMessageID": "54000",
  "ProcessID": "test",
  "OpID": "test",
  "OpParentID": "test",
  "HostName": "ohadavn",
  "req1": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "req2": "yyyyyyyyyyyyyyyyyyyyyyyyyy"
}


By sending this command:

curl -XPOST "http://localhost:9200/monitors/monitor/?pretty" -d "{ \"ProcessGroup\": \"test\",
\"ProcessName\": \"test\", \"OpName\": \"test\", \"Domain\": \"test\",
\"TraceType\": \"Info\", \"TraceDateTime\": \"2016-04-04 04:46:47\",
\"PatientID\": \"test\", \"MessageDateTime\": \"2016-04-04 04:46:47\",
\"ApplicationCode\": \"test\", \"SrcMessageID\": \"54000\",
\"ProcessID\": \"test\", \"OpID\": \"test\", \"OpParentID\": \"test\",
\"HostName\": \"ohadavn\",
\"req1\": \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\",
\"req2\": \"yyyyyyyyyyyyyyyyyyyyyyyyyy\" }"

Pretty ugly, right? Don't worry, we'll get started with Sense right away...

Start Kibana server and add a new index pattern


1. Start the Kibana server (default port 5601):

[Kibana installation]\bin\kibana.bat


2. Open web-browser:

http://localhost:5601/

3. Add an index pattern for the mapping configured previously:

Index name or pattern: monitors*
Time-field name: TraceDateTime



4. On the top menu, choose "Discover" to view all documents
   (currently, only one exists).
   On the top-right menu, change the time range to "Last 5 years".


 

Sense – a GUI for sending commands to Elasticsearch


Open the Kibana apps menu and choose Sense:


1. Delete the index created before by running:

DELETE /monitors

2. Create Index Template & Mapping:

POST /_template/template_monitors
{
  "template": "monitors*",
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "monitor": {
      "properties": {
        "ProcessGroup": {
          "type": "string",
          "index": "not_analyzed"
        },
        "ProcessName": {
          "type": "string",
          "index": "not_analyzed"
        },
        "OpName": {
          "type": "string",
          "index": "not_analyzed"
        },
        "Domain": {
          "type": "string",
          "index": "not_analyzed"
        },
        "LogLevel": {
          "type": "string",
          "index": "not_analyzed"
        },
        "StartDateTime": {
          "type": "date",
          "format": "yyyy-MM-dd HH:mm:ss"
        },
        "EndDateTime": {
          "type": "date",
          "format": "yyyy-MM-dd HH:mm:ss"
        },
        "PatientID": {
          "type": "string",
          "index": "not_analyzed"
        },
        "MessageDateTime": {
          "type": "string"
        },
        "ApplicationCode": {
          "type": "string",
          "index": "not_analyzed"
        },
        "SrcMessageID": {
          "type": "string",
          "index": "not_analyzed"
        },
        "ProcessID": {
          "type": "string",
          "index": "not_analyzed"
        },
        "OpID": {
          "type": "string",
          "index": "not_analyzed"
        },
        "OpParentID": {
          "type": "string",
          "index": "not_analyzed"
        },
        "HostName": {
          "type": "string",
          "index": "not_analyzed"
        },
        "Status": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}

3. Insert a test document into an index for the current month (it will be created automatically):

POST monitors-2016-05/monitor
{
  "ProcessGroup": "test",
  "ProcessName": "test",
  "OpName": "test",
  "Domain": "test",
  "LogLevel": "Info",
  "StartDateTime": "2016-05-04 04:46:47",
  "EndDateTime": "2016-05-04 04:47:47",
  "PatientID": "test me please",
  "MessageDateTime": "2016-05-04 04:46:47",
  "ApplicationCode": "test",
  "SrcMessageID": "54000",
  "ProcessID": "test",
  "OpID": "test",
  "OpParentID": "test",
  "HostName": "ohadavn",
  "Status": "10",
  "req1": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "req2": "yyyyyyyyyyyyyyyyyyyyyyyyyy"
}

 Basic useful queries:


 Get "monitor" type mapping:


 GET monitors/_mapping/monitor 

 Get all "monitor" documents:


POST monitors*/monitor/_search
{
  "query": {
    "match_all" : {}
  }
}

  First 200 documents of the "monitor" type:


 GET monitors*/monitor/_search 
 {
    "from" : 0, "size" : 200,
     "query": {
        "term": {
          "_type" :    "monitor"
        }
    } 
 } 

OR

 GET monitors/monitor/_search 
 {
    "from" : 0, "size" : 203
 }

The 10 most recent "monitor" documents inserted today:


POST monitors*/monitor/_search
{
  "from": 0,
  "size": 10,
  "query": {
          "range": {
            "StartDateTime": {
              "gte": "now-1d/d",
              "lte": "now/d",
              "boost": 2,
              "format": "yyyy-MM-dd HH:mm:ss"
            }
          }
  },
  "sort": {
    "StartDateTime": {
      "order": "desc",
      "ignore_unmapped": "true"
    }
  }
}

"monitor" documents between two dates, sorted by descending time:


POST monitors*/monitor/_search
{
  "query": {
    "range": {
      "StartDateTime": {
        "gte": "2016-04-01 13:59:50",
        "lte": "2016-04-10 13:59:50",
        "boost": 2,
        "format": "yyyy-MM-dd HH:mm:ss"
      }
    }
  },
  "sort": {
    "StartDateTime": {
      "order": "desc",
      "ignore_unmapped": "true"
    }
  }
}

Document by id:


POST monitors*/monitor/_search
{
  "query": {
    "term": {
      "_id": {
        "value": "AVP-jR7M3HcXFbGJa5pk"
      }
    }
  }
}

OR

POST monitors*/monitor/_search
{
  "query": {
    "term": {
      "_id": "AVP-jR7M3HcXFbGJa5pk"
    }
  }
}

Documents filtered by a condition (filtering doesn't affect scoring), sorted by descending date:


POST monitors*/monitor/_search
{
  "query": {
        "filter": {
            "term" : { "ProcessName" : "myApp" }
        }
  },
  "sort": {
    "TraceDateTime": {
      "order": "desc",
      "ignore_unmapped": "true"
    }
  }
}

Contains query:



POST monitors*/monitor/_search
{
  "query": {
      "wildcard": {
        "ProcessName": "*test*"
      }
  },
  "sort": {
    "StartDateTime": {
      "order": "desc",
      "ignore_unmapped": "true"
    }
  }
}


Create a snapshot (back up data):


First register a file-system snapshot repository:

PUT /_snapshot/dbbackup
{
  "type": "fs",
  "settings": {
      "compress": true,
      "location": "dbbackup"
  }
}

Then create the snapshot itself:

PUT /_snapshot/dbbackup/snap

Restore snapshot:


POST /_snapshot/dbbackup/snap/_restore


Delete by query:


Requires plugin installation:

[Elasticsearch installation]\bin\plugin install delete-by-query

DELETE /monitors/monitor/_query
{
  "query": {
    "term": { "ProcessName" : "proc" }
  }
}


Visualize & Dashboard


I won't get into this subject in this two-part series, but it's really easy to create various data visualizations and dashboards using Kibana.


Remarks


To allow other clients to send commands to Elasticsearch, edit

[Elasticsearch installation]\config\elasticsearch.yml

and add:

cluster.name: [cluster name]
network.host: [Server IP address on the network], 127.0.0.1

To create snapshots, also add:

path.repo: ["C:\\path\\to\\snapshots"]

Thank you Blogger, hello Medium

Hey guys, I've been writing on Blogger for almost 10 years and it's time to move on. I'm happy to announce my new blog at Med...