Markup language logs are usually based on XML, JSON or YAML. They have the advantage of a well-defined structure, and tools exist to create, validate and read these structures. They also have the benefit that they can easily be converted into the data structures of various programming languages.
The disadvantage of these formats is that, without tools, they cannot be read as easily as text line logs.
Even though the markup language formats have a uniform syntactical structure, the individual formats differ considerably. The Log4j 1.2 XML format consists of XML fragments, where the information is mostly embedded in attributes. The Java Logging API format consists of a complete XML structure; all information is embedded as text in child elements.
The configurable impulse XML Log Reader, impulse JSON Log Reader and impulse YAML Log Reader allow reading all typical types of markup language based log formats and are supplied with a number of standard configurations such as GELF and Log4j.
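A GELF entry, for example, is just another JSON object per log message; a minimal entry looks roughly like this:

{ "version": "1.1", "host": "example.org", "short_message": "A short message", "timestamp": 1385053862.3072, "level": 6 }

The following example, which is used throughout the rest of this section, uses a different JSON layout: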
, { "thread" : "main", "level" : "INFO", "loggerName" : "de.toem.impulse.test.primary.Log1", "message" : " ACPI: Local APIC address 0xfee00000", "endOfBatch" : false, "loggerFqcn" : "org.apache.logging.log4j.spi.AbstractLogger", "instant" : { "epochSecond" : 1587458274, "nanoOfSecond" : 900000000 }, "threadId" : 1, "threadPriority" : 5 }
The JSON log from above contains two kinds of objects: the log object and the inner time-stamp (instant) object. Together they form one log entry.
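To illustrate the point from the introduction that such entries map directly onto data structures, here is a minimal Java sketch, independent of impulse and using the Jackson library, that reads one such (shortened) entry into nested maps:

import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

public class JsonLogEntryDemo {
    public static void main(String[] args) throws Exception {
        // One (shortened) log entry as shown above
        String entry = "{ \"thread\" : \"main\", \"level\" : \"INFO\", "
                + "\"message\" : \" ACPI: Local APIC address 0xfee00000\", "
                + "\"instant\" : { \"epochSecond\" : 1587458274, \"nanoOfSecond\" : 900000000 } }";

        // The outer log object becomes one map, the inner "instant" object a nested map
        Map<?, ?> log = new ObjectMapper().readValue(entry, Map.class);
        Map<?, ?> instant = (Map<?, ?>) log.get("instant");

        System.out.println(log.get("level") + " " + log.get("message"));
        System.out.println(instant.get("epochSecond") + " s, " + instant.get("nanoOfSecond") + " ns");
    }
}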
Very similar is the Log4j 1.2 XML format, with an outer log entry element and an inner message element:
<log4j:event logger="de.toem.impulse.test.primary.Log1" timestamp="1589182416309" level="DEBUG" thread="main">
<log4j:message><![CDATA[ rcu: ]]></log4j:message>
</log4j:event>
And very different is the next example. The Java logging XML has a structure of elements for all information and does not contain any attributes.
<log>
<record>
<date>2009-12-09T10:04:57</date>
<millis>1260371097880</millis>
<sequence>0</sequence>
<logger>com.sun.deploy</logger>
<level>FINE</level>
<class>com.sun.deploy.util.LoggerTraceListener</class>
<method>print</method>
<thread>10</thread>
<message>Reading certificates from 11108 http://pub.admc.com/modeler/modeler-pro.jar | /home/blaine/.java/deployment/cache/6.0/36/363eb424-1a174301.idx
</message>
</record>
...
</log>
The log configuration dialog shall reflect this structure. The first element of the JSON example starts a new log sample. The second one (the time-stamp) is part of this log entry, so this object's action is set to "Add to previous sample" ("previous" in terms of what was detected before). The last configured object is the "Ignore" pattern. This one is optional and makes sense if there are other objects in the log that shall not be captured; otherwise an error is thrown for objects that are not part of the configuration.
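For the JSON example, the configured objects could look roughly like this (a sketch; the actual dialog may differ):

Log object      -> starts a new log sample
Instant object  -> action "Add to previous sample"
Ignore pattern  -> optional, discards all other objects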
impulse identifies a resource file by its extension and its content data. To change the file extension for a supported record type:
In the next step, you need to identify the attributes and variables.
JSON and YAML logs only have variables; XML structures have attributes and text elements.
In the JSON example, the main object has 'thread', 'level', 'loggerName', 'message', 'endOfBatch', 'loggerFqcn', 'threadId' and 'threadPriority'. The time-stamp object has 'epochSecond' and 'nanoOfSecond'.
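In the XML examples above, by contrast, 'logger', 'timestamp', 'level' and 'thread' of the Log4j 1.2 event are attributes, while the message is the text content of a child element; the Java logging format keeps all information in text elements.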
Log signals are struct signals with multiple members.
Select member names and types for all variables that shall have a member in your log samples (e.g. the message).
Valid member types are:
The descriptor field can optionally contain a content or member descriptor:
For JSON values that shall not find their way into the signal structure, leave the name blank or set the type to 'None'.
The table below shows the extracted members.
To define the time-stamp, first look at the domain base settings in the reader configuration dialog. For logs, you typically have two meaningful options: 'Time' or 'Date'. With 'Time' you focus on some kind of duration (e.g. seconds or nanoseconds); 'Date' gives you an absolute position in time using Year, Month, Day, Hours, Minutes, Seconds and Milliseconds.
There may be log elements without a domain value. In this case you may want to use:
You may extend the domain position with another source value, as in our JSON case (e.g. one value contains the seconds, another the nanoseconds):
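As an illustration of how the two source values of the JSON example combine into one position (independent of how impulse handles this internally), a small Java sketch:

import java.time.Instant;

public class TimestampDemo {
    public static void main(String[] args) {
        long epochSecond = 1587458274L;   // from instant.epochSecond
        long nanoOfSecond = 900000000L;   // from instant.nanoOfSecond

        // As a single position on a 'Time' domain base (nanoseconds)
        long nanos = epochSecond * 1_000_000_000L + nanoOfSecond;

        // As an absolute 'Date' position
        Instant date = Instant.ofEpochSecond(epochSecond, nanoOfSecond);

        System.out.println(nanos);   // 1587458274900000000
        System.out.println(date);    // 2020-04-21T08:37:54.900Z
    }
}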
Each single sample belongs to one signal. You can organize your samples into one or multiple signals (e.g. a signal per log source). The "Signal/Scope name" section allows you to select the signal and scope naming scheme for each log element.
In most cases it makes sense to take the signal name from the log source; all logs are then organized by their source, which makes them easier to navigate and understand.
You may extend the naming scheme with another source value. If enabled, the value of that source will be appended to the signal name (in parentheses).
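For the JSON example, taking the signal name from 'loggerName' and using 'thread' as the additional source value would, assuming this configuration, result in a signal like 'de.toem.impulse.test.primary.Log1 (main)'.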
The XML, JSON and YAML log readers are very much the same from the UI side, but there are slight differences.
The XML reader has this optional setting:
If not checked, the reader will only accept well-formed XML. If checked, the reader wraps the received XML data into a dummy root element.
<dummy>
<log4j:event logger="de.toem.impulse.test.secondary.Log3" timestamp="1589182416306" level="WARN" thread="ThreadB">
<log4j:message><![CDATA[get Sinus Wave 0.9629702887498031]]></log4j:message>
</log4j:event>
</dummy>
Another difference is the text element: text content can be mapped to members using the first pseudo attribute ("XML Text").
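In the Log4j 1.2 examples above, for instance, the actual message is the CDATA text content of the log4j:message element; to map it to a member (e.g. the message member), it is addressed via this "XML Text" pseudo attribute rather than via a regular attribute name.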