How to extract custom dimensions from plugin_instance in Splunk monitoring when using collectd? - hadoop

According to the "Getting logs and metrics into metricstore" presentation here, slide 23 ("GDI: collectd write_http plugin"):
"plugin_instance is currently the only dimension extracted in addition to the default available dimensions"
This means we can send only one custom dimension, no longer than 128 characters.
We are using the "collectd-hadoop" plugin to collect Apache Hadoop metrics.
Could you give any advice on how I can extract several dimensions in Splunk from plugin_instance, in case I send several of them as in this sample:
"plugin_instance":"[dim1:val1,dim2:val2,dim3:val3]"

Prometheus configuration and http_requests_total

I have installed Prometheus with the default configuration.
I am at its web interface, at http://localhost:9090/metrics, trying to fetch the time series corresponding to the total number of HTTP requests.
Filtering by the name http_requests_total retrieves several time series with different labels, e.g.
http_requests_total{code="200",handler="targets",instance="localhost:9090",job="prometheus",method="get"}
http_requests_total{code="200",handler="static",instance="localhost:9090",job="prometheus",method="get"}
http_requests_total{code="200",handler="graph",instance="localhost:9090",job="prometheus",method="get"}
[...]
What are all these time series? How can I find the semantics behind each label?
First, if you visit http://localhost:9090/metrics in your browser, you should see something along the lines of:
# HELP prometheus_http_request_duration_seconds Histogram of latencies for HTTP requests.
# TYPE prometheus_http_request_duration_seconds histogram
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.1"} 3
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.2"} 3
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.4"} 3
...
which should explain what the metric measures and hopefully what the labels are intended to represent. If you don't know what a counter/gauge/histogram is, then you should probably RTFM.
And if you want to dig deeper (and have access to the source code of the monitored service, as is the case with Prometheus), you can search said source code for the metric name. Note that the metric name in the code may be a substring of the final metric name, as a namespace may be prepended to it (the prometheus_ part in my example above), and for histograms and summaries _count, _bucket or something else may be appended. So in the case of the metric above you should search the code for "http_request_duration_seconds" rather than "prometheus_http_request_duration_seconds_bucket".
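Once you know what each label means, you can also slice the data by label in the expression browser with PromQL. As a minimal illustration using the metric from the question, this shows the per-handler request rate over the last five minutes:
sum by (handler) (rate(http_requests_total[5m]))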

How to add a shapefile to a map using the ArcGIS JavaScript API even if the shapefile exceeds the maximum number of records allowed, i.e. 1000

I am new to ArcGIS. I am referring to the documentation of the ArcGIS JavaScript 3.19 API. I took the example for adding a shapefile from that documentation, but when I add a zip file containing the .shp, .shx, .dbf, etc. files, it gives me an error like "The maximum number of records allowed (1000) has been exceeded".
It's a limitation according to the shapefiles documentation (link found on the sample app Add Shapefile):
Limitations
Files containing more than 1,000 features cannot be added to a map
How about splitting your file into chunks of fewer than 1,000 shapes?

Adding logs to a WellSectionWindow

How would one add logs to a well-section window programmatically? For the following well logs within my Petrel input tree, and using the code below, only the "Sonic" log is displayed on the WellSectionWindow:
Well
->WellLogs
- Density
- Sonic
- Gamma ray
Borehole borehole = arguments.Object as Borehole;
WellSectionWindow wsw = WellSectionWindow.CreateWindow();
wsw.ShowObject(borehole);
Within Petrel (2013.1), I can navigate to the Log element->(right-click)->"Add to template"->"Vertical"->"In new track". I would like to know if something similar can be achieved using the Ocean APIs, and to be pointed towards the relevant documentation. Also, I'd like to know why the "Sonic" log was displayed within the WellSectionWindow in Petrel and how it got prioritized over the Density and Gamma ray logs.
The WellLogVersion of a WellLog corresponds to the global well log in the input tree.
If you want to display the log, you can call wsw.ShowObject(wellLogVersion) and it will be displayed.
If you want to control the order of the logs being displayed, you'll need to deal with the format template nodes of the well section templates. The details can be found in the Ocean dev guide, Volume 9, Chapter 3.
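For example, a minimal C# sketch continuing from the question's code; the Logs.WellLogs collection path below is an assumption about the Ocean API at hand, so check the API reference for your release:
// borehole and wsw as obtained in the question's snippet above.
// Show each well log explicitly via its WellLogVersion (the global well log);
// which logs appear by default is controlled by the well section template.
foreach (WellLog log in borehole.Logs.WellLogs)
{
    wsw.ShowObject(log.WellLogVersion);
}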

The numbers on the online dashboard differ from the downloaded CSV in Google Analytics [closed]

I have a "very simple" problem.. When i make a custom report in a filtered view in my Analytics account, the numbers i see on the online dashboard differs from the ones, that are downloaded directly from that report into Excel .csv.. We do cross-domain tracking, and the purpose of the filtered view is to see the domains in the view (basic, suggested by analytics help).
What could be the problem? Do any of you suffer from the same problem? This is very annoying, because we can't trust our numbers..
Thank you in advance,
Adam
Your issue may be the sampling level:
samplingLevel=DEFAULT
Optional. Use this parameter to set the sampling level (i.e. the number of sessions used to calculate the result) for a reporting query. The allowed values are consistent with the web interface and include:
•DEFAULT — Returns response with a sample size that balances speed and accuracy.
•FASTER — Returns a fast response with a smaller sample size.
•HIGHER_PRECISION — Returns a more accurate response using a large sample size, but this may result in the response being slower.
If not supplied, the DEFAULT sampling level will be used. See the Sampling section for details on how to calculate the percentage of sessions that were used for a query.
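For reference, this is how the parameter appears in a Core Reporting API (v3) request; the view ID ga:12345678 and the date range are placeholders:
https://www.googleapis.com/analytics/v3/data/ga?ids=ga:12345678&start-date=2014-01-01&end-date=2014-01-31&metrics=ga:sessions&samplingLevel=HIGHER_PRECISION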
There is no direct way to test whether this is your problem, because at this time you can't set the sampling level when extracting as CSV, nor can you set it in the Query Explorer. But I wrote an application that extracts information into CSV and allows you to select the sampling level: Daimto - Google Analytics Export. You can use it to test whether this is the problem or not.

New Relic RPM metric names for browser page load time

In the dashboard of New Relic RPM I can see a chart named "Browser page load time".
It contains 4 values; what are the metric names for this chart?
I got a response from their support team:
EndUser/average_app_without_queue_time
EndUser/average_network_time
EndUser/average_page_rendering_time
EndUser/RB/average_dom_content_load_time
But:
the EndUser/ endpoint didn't specify the average_app_without_queue_time metric.
the EndUser/ endpoint didn't specify the average_page_rendering_time metric.
So I'm missing two of the above four metrics...
I simply want to fetch the data represented in the above chart in JSON format via their API.
Those two metrics, "EndUser/average_app_without_queue_time" and "EndUser/average_page_rendering_time", are not standalone metrics like the other two; they are calculated metrics. This means that they are not available directly from the API, and that instead you will need to pull down their constituent parts and perform the calculation. Here are the two formulas:
average_app_without_queue_time = ("EndUser".total_app_time - "EndUser/RB".total_queue_time) / "EndUser".call_count
and
average_page_rendering_time = "EndUser".average_fe_response_time - "EndUser/RB".average_dom_content_load_time
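To pull the constituent parts in JSON, one option is the REST API v2 metric data endpoint; this is a hedged sketch, with APP_ID and YOUR_API_KEY as placeholders:
curl -G "https://api.newrelic.com/v2/applications/APP_ID/metrics/data.json" \
  -H "X-Api-Key:YOUR_API_KEY" \
  -d "names[]=EndUser" -d "names[]=EndUser/RB" \
  -d "values[]=total_app_time" -d "values[]=total_queue_time" -d "values[]=call_count" \
  -d "summarize=true"
From the returned JSON you can then apply the formulas above, e.g. average_app_without_queue_time = (total_app_time - total_queue_time) / call_count.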
