Usage

Uploading data

To upload data, use Uniloader's "upload" command:

NAME:
   uniloader upload - Uploads a source file to a QueueMetrics or QueueMetrics Live instance

USAGE:
   uniloader upload [command options] [arguments...]

OPTIONS:
   --uri value, -u value       The connection URI. Valid URIs start with file:, mysql:, http:, https:
   --login value, -l value     The login for your connection (default: "webqloader")
   --pass value, -p value      The password for your connection (default: "qloader") [$UPASSWD]
   --token value, -t value     In MySQL mode, the partition. In HTTP/S mode, usually blank or server-id
   --splitter value, -x value  A JSON file describing how to split the source into multiple QM instances
   --noActions                 Actions from QM will NOT be sent to the PBX via AMI. Requires HTTP/S.
   --pid value                 The PID file to write. If present, won't start.
   --db-rewriter-json value    The JSON configuration file for database agent and queue rewrites.
   --forced-upload             Will upload data without checking for HWM and will terminate when file is over.

So you usually launch it like:

./uniloader --src /var/log/asterisk/queue_log upload \
            --uri mysql:/queuemetrics --login qm --pass 1234 --token P001

You can omit parameters whose value matches the default; so if your token is blank, or your user is "webqloader" (as is the case with default QueueMetrics Live instances), you do not need to pass them explicitly.

Uniloader reads the source file specified with "--src" and automatically detects when the file is rotated or rewritten.

When data is being uploaded, Uniloader makes sure that data is not uploaded twice and retries on errors. You can safely restart it at any time and it will automatically synchronize with the current state of the selected back-end.

You should NEVER have multiple Uniloader (or older Qloader) processes point to the same partition on the same instance at the same time. If you do, you will get hard-to-debug data corruption.
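To guard against accidental double starts, you can use the "--pid" option listed above: Uniloader will refuse to start if the PID file is already present. For example (the PID file path is illustrative):

```shell
# Only one Uniloader per partition: if /var/run/uniloader.pid already
# exists, this invocation will refuse to start.
./uniloader --src /var/log/asterisk/queue_log upload \
            --uri mysql:/queuemetrics --login qm --pass 1234 \
            --token P001 --pid /var/run/uniloader.pid
```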

Feedback actions: proxying AMI

It is possible for Uniloader to act as a kind of proxy for a remote QueueMetrics instance. This happens by default if you use an HTTP back-end. If you do not want this feature, start Uniloader with the "--noActions" option.

For example:

./uniloader --src /var/log/asterisk/queue_log upload \
            --uri http://my.queuemetrics-live.com/test1234 --pass 1234

All access information for the Asterisk PBX is configured on the QueueMetrics instance; for example, if the PBX server is accessible at the address 127.0.0.1 (that is, the same host Uniloader runs on) and you log in as user "admin" with password "amp123", edit the configuration properties and make sure they read:

callfile.dir=tcp:admin:amp123@127.0.0.1
default.webloaderpbx=true
platform.pbx=DIRECTAMI

If you use the CLASSIC platform mode, make sure you include the default QueueMetrics dial-plan in extensions.conf (#include extensions_queuemetrics.conf), and have Asterisk reload the configuration.
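For example (file locations may differ on your system), the include at the end of extensions.conf would look like:

```
; /etc/asterisk/extensions.conf
; Pull in the dial-plan contexts that QueueMetrics uses for actions:
#include extensions_queuemetrics.conf
```

After editing, reload the dial-plan, e.g. with: asterisk -rx "dialplan reload"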

The AMI feedback feature works transparently with both Asterisk AMI and FreeSWITCH ESL (as long as the platform is set correctly).

You can see actions being performed in QueueMetrics from "System diagnostic tools" and then selecting "Remote commands".

Forced upload of existing queue_log files

It is possible to upload a queue_log file and then have Uniloader terminate as soon as the operation completes.

Just run it as:

uniloader -s /var/log/queue_log.12 upload \
          -u https://my.queuemetrics-live.com/test/ -p 9999 --forced-upload

In this mode:

  • The file is uploaded from beginning to end, no matter the instance's current HWM (high-water mark) - you can upload older data, or multiple log files, without caring about their sequence.

  • You can run this in parallel to an existing Uniloader service that is uploading current data.

  • You can upload the same file multiple times - if some or all data is already present on the database, it will be skipped.

  • You can apply splitting and rewriting rules - in this case, the process terminates when the last uploader consumes all rows.

Data deduplication requires QueueMetrics version 19+ and the HTTP interface for uploading. On older systems, or when using direct MySQL access, data will be duplicated, so forced uploads should not be used there. If you suspect duplicate data, see Deduplicating data.

To upload multiple files in a sequence, you could use something like:

for FILE in $( find /var/log/asterisk -type f -iname "queue_log*" -printf "%T+ %p\n" | sort | awk '{print $2}' ) ; \
   do uniloader --src "${FILE}" upload --uri https://my.queuemetrics-live.com/INSTANCE --token "" --pass "PASS" --forced-upload ; done

This gathers all files under /var/log/asterisk that look like partial queue_logs, sorts them by modification time and uploads them in order.

Rewriting queues and agents

If you run Uniloader with the option --db-rewriter-json and pass a JSON file like the one below:

{
	"type": "mysql",
	"uri": "localhost/queuemetrics",
	"login": "queuemetrics",
	"password": "javadude",
	"shorten-domain": false,
	"sql-agent": "SELECT '' as TENANT, ? as ID ",
	"sql-queue": "SELECT '' as TENANT, ? as ID  "
}

then, every time an agent or queue id is found, a SQL query is run to resolve it to a '(tenant, id)' tuple, which in turn is used to create its actual name.

This is useful because you often use simple ids for queues and agents, but such ids are opaque and not useful for splitting the log into multiple tenants. For example, if the agent on a queue is called 'SIP/10907686', it would be better to rewrite it as 'SIP/customer7-123' if you know that id '10907686' is agent '123' for tenant 'customer7'.

Rewriting happens after the log is read and before it is split, so the splitter already receives agent and queue fields rewritten.

  • The queries must return exactly one row, with two string fields: the tenant and the rewritten id.

  • The '?' placeholder in the query is replaced with the id being resolved when the query runs.

  • If you do not use multiple tenants, always return a blank string as the tenant.

  • It is better to return a complete agent id, like 'Agent/123', rather than just '123'.

  • As the tenant name is often the virtual host that the tenant uses on your system, you can have it shortened to its first token by setting "shorten-domain" to true, e.g. "customer3.mypbx.some" becomes "customer3".

To avoid excessive database load, each query is run only once per id and its result is cached. It is therefore mandatory that the same query always returns the same result, or multiple runs might produce different queue_log files.
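As a sketch of the resolution logic described above (an illustration, not Uniloader's actual code), the following Python snippet shows how a resolved '(tenant, id)' tuple, the optional domain shortening, and per-id caching combine to produce the rewritten name; the dictionary stands in for the configured "sql-agent" query:

```python
# Sketch of the rewrite: resolve an id to (tenant, id), optionally shorten
# the tenant to its first dotted token, and cache results so each id is
# resolved only once. AGENT_MAP plays the role of the "sql-agent" query.
AGENT_MAP = {"10907686": ("customer7.mypbx.some", "123")}
_cache = {}

def rewrite_agent(channel, shorten_domain=True):
    if channel in _cache:
        return _cache[channel]
    tech, sep, raw_id = channel.partition("/")     # "SIP", "/", "10907686"
    tenant, new_id = AGENT_MAP.get(raw_id, ("", raw_id))
    if shorten_domain and tenant:
        tenant = tenant.split(".")[0]              # "customer7.mypbx.some" -> "customer7"
    new_name = tech + sep + (tenant + "-" if tenant else "") + new_id
    _cache[channel] = new_name                     # resolved only once
    return new_name

print(rewrite_agent("SIP/10907686"))   # -> SIP/customer7-123
print(rewrite_agent("SIP/999"))        # unknown id is left as-is -> SIP/999
```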

Splitting a single queue_log file into multiple back-ends

If you run a single Asterisk instance on which multiple clients are hosted, chances are that you configure your Asterisk system with a common naming convention, so that all extensions for your client Foo Company are named "foo-123", all queues are named "foo-q1" and so on.

If you do, it is actually possible to split the queue_log file that Asterisk generates into multiple virtual queue_log files. To do this, Uniloader looks for references to the client name in queues and agents, and can optionally rewrite them, so that a reference to queue "foo-q1" is sent to a specific QueueMetrics Live instance set up just for Foo Company, where it appears as simply "q1".

To split a single queue_log file you need to create a split file that details what you want done, and then you can launch:

./uniloader --src queue_log.txt upload --splitter splitter.json

Please note that you do not need to specify a "main" back-end on the command line. If you do, a copy of the source file will also be uploaded to the main driver, without applying any transformation.

These are sample contents for a splitter.json file:

[
	{
		"uri": "http://my.queuemetrics-live.com/foocompany",
		"login": "webqloader",
		"pass": "verysecure",
		"token": "",
		"matcher": ["foo-"],
		"match": "any",
		"removematch": true,
		"disabled": false,
		"noactions": false,
		"clientname": "foo"
	},
	{
		"uri": "mysql:127.0.0.1/queuemetrics",
		"login": "queuemetrics",
		"pass": "itsasecret",
		"token": "P001",
		"matcher": ["bar-"],
		"match": "any",
		"removematch": false
	}
]

The following items can be specified for each instance.

  • uri: the URI to upload data to. You can mix and match different back-ends as you see fit

  • login, pass and token: the information required by your back-end

  • matcher: an array of strings that will be searched in the agent and queue fields.

  • match: it can be either "any" (if a string is found anywhere in the field, it is a match), "prefix" or "suffix"

  • removematch: if true, the matching string is removed from the queue and agent fields

  • disabled: set to true to manually turn off a rule

  • noactions: set to true to turn off AMI actions for this instance, as you would do for the main instance by using the "--noActions" flag.

  • clientname: the name of the instance, which will be injected in the AMI responses using the dialplan variable UNILOADER_CLIENT before they are passed to Asterisk. It will also be used to replace the sequence !UNILOADER_CLIENT in your Asterisk channels.

If you omit an item, it is assumed to be a blank string or the boolean value "false". Defaults you set on the command line are ignored, so all relevant information must be specified in the JSON file.

Split data is sent only to the instances matching the specific split rule; the main instance you specify on the command line, however, is fed all data in any case. As you usually do not want this, you can simply avoid passing any "--uri" parameter on the command line.
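The matcher semantics described above can be sketched as follows (an illustration of the documented behaviour, not Uniloader's actual code):

```python
# Sketch of the splitter's matcher semantics: "any" matches the string
# anywhere in the field, "prefix"/"suffix" anchor it to the start/end.
# With removematch=True the matching string is stripped from the field.
def matches(field, matcher, match="any"):
    for m in matcher:
        if (match == "any" and m in field) or \
           (match == "prefix" and field.startswith(m)) or \
           (match == "suffix" and field.endswith(m)):
            return m
    return None

def apply_rule(field, matcher, match="any", removematch=False):
    m = matches(field, matcher, match)
    if m is None:
        return None                      # rule does not apply to this field
    return field.replace(m, "") if removematch else field

print(apply_rule("foo-q1", ["foo-"], match="prefix", removematch=True))  # q1
print(apply_rule("bar-q1", ["bar-"]))                                    # bar-q1
print(apply_rule("bar-q7", ["foo-"]))                                    # None
```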

Splitting FAQs

What happens if one back-end is or becomes unavailable?

Each back-end runs in parallel; if one lags behind or becomes unavailable, data for it is delayed until it is fully operational again; at that point it will catch up automatically.

You can also safely restart Uniloader even if not all data is currently uploaded to all instances; the only thing you have to consider is that, in case your queue_log is rotated, then only data present in the current queue_log file is uploaded.

This works correctly only for the MySQL and HTTP drivers; if you specify a file back-end, it will be truncated and rebuilt on each invocation.

Can I use different back-ends?

Yes, of course. Mix and match them as you best see fit.

Can I use feedback actions?

Yes - provided that all back-ends are HTTP/S.

What happens to the default back-end?

The default back-end - the one that is specified on the command line - is sent the raw 'queue_log' data. If you don’t need this, you can use a file back-end and point it to '/dev/null', or you can simply omit it.
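For example, assuming the file: driver accepts a plain path, the main back-end could be silenced like this (illustrative only):

```shell
# Illustrative: discard the raw main-instance feed while the splitter
# rules still deliver data to the per-client back-ends.
./uniloader --src /var/log/asterisk/queue_log upload \
            --uri file:/dev/null --splitter splitter.json
```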

Do I have to have a splitting rule for all my virtual clients?

No. Only the rules you specify will be applied, so if you do not include a rule for a specific client, the relevant logs will simply be ignored. This means that you may host on the same Asterisk instance clients who use QueueMetrics and clients that don’t.

How do I modify the configuration on a live system?

You can simply create a new JSON file and restart Uniloader. In a few seconds it will sync again and start tailing the file. The source file is read once and shared in parallel by all the different back-ends, so splitting does not require a proportional amount of disk IOPS.

Why do I need the clientname field?

If you have a scenario where multiple QueueMetrics instances are fed from the same Asterisk system, it is handy to have rewriting enabled, so that e.g. the queue called "foo-q1" appears at the QueueMetrics level as simply "q1".

This works fine when uploading data to QueueMetrics, but when actions are performed by that QueueMetrics instance, they will appear as happening on queue "q1" and not on the actual Asterisk queue "foo-q1".

By injecting the variable UNILOADER_CLIENT, it is therefore possible to edit the actions dialplan and rebuild the correct physical name to use when performing actions at the Asterisk level.
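As a purely illustrative sketch (the extension and the QUEUENAME variable are hypothetical; only UNILOADER_CLIENT comes from Uniloader), the actions dialplan could rebuild the physical queue name like this:

```
; Hypothetical dial-plan fragment: rebuild the physical Asterisk queue
; name from the client prefix injected by Uniloader.
exten => s,1,Set(PHYSQUEUE=${UNILOADER_CLIENT}-${QUEUENAME})
 same => n,NoOp(Performing action on physical queue ${PHYSQUEUE})
```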