Bug #12013 (open)

Reading log data is inefficient in certain cases

Added by Jim Pingle over 3 years ago.

Status: New
Priority: Low
Assignee: -
Category: Logging
Target version: -
Start date: 06/08/2021
Due date: -
% Done: 0%
Estimated time: -
Plus Target Version: -
Release Notes: Default
Affected Version: -
Affected Architecture: -

Description

When reading log files, the functions are set to fetch a specific number of lines (e.g. 50, 250, 500), but to get those lines the entire set of logs is read in, filtered, and the result then trimmed to the requested size.

This is done for some logs, such as the filter log, to ensure that even if the user asks for 50 lines they actually get 50 parsed lines, not an empty result when 50 entries were suppressed or could not be parsed. It is also necessary for searching, since the user wants N results, not a search over only the N lines returned.
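Roughly, the current behavior looks like this (a Python sketch for illustration only; read_all_matching and the match predicate are hypothetical names, not the actual code):

    import glob
    import gzip

    def read_all_matching(pattern, match, limit):
        # Read every log file the pattern covers, including rotated
        # compressed ones, filter everything, then trim the result.
        results = []
        for path in sorted(glob.glob(pattern)):
            opener = gzip.open if path.endswith(".gz") else open
            with opener(path, "rt", errors="replace") as fh:
                for line in fh:
                    if match(line):
                        results.append(line.rstrip("\n"))
        # Keep only the newest `limit` entries, mirroring the trim step.
        return results[-limit:]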

With large log files this can be very inefficient, since it may read in many MB of data (possibly decompressing it at the same time) just to return a handful of lines. However, knowing when to reach deeper into the logs is not simple to solve. Rather than reading in the whole set, the code could check one log file at a time until it has enough results to return. There may also be ways to limit the probe to one chunk of a file at a time (e.g. if the user wants 50 lines, fetch the first 50; if there are not enough results, fetch the next 50).
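A sketch of the one-file-at-a-time idea, using the same hypothetical names as above (note each probed file is still read whole here; the chunk-at-a-time refinement would also need seek-based backward reads, which plain text files allow but gzip streams do not support cheaply):

    def read_incremental(paths, match, limit):
        # `paths` must be ordered newest log first, e.g.
        # ["filter.log", "filter.log.0.gz", "filter.log.1.gz", ...]
        results = []
        for path in paths:
            opener = gzip.open if path.endswith(".gz") else open
            with opener(path, "rt", errors="replace") as fh:
                matched = [ln.rstrip("\n") for ln in fh if match(ln)]
            # Within a single file the newest entries are at the end.
            results.extend(reversed(matched))
            if len(results) >= limit:
                break  # enough results; older (compressed) logs stay unread
        return results[:limit]  # newest entry first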

This needs to be quantified in some way to be sure the added logic and iteration isn't slower than just dumping the log data and parsing it the current way.
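One rough way to measure that, continuing the sketches above (best_time and is_block are again illustrative names, not existing code):

    import time

    def best_time(fn, *args, repeat=5):
        # Run `fn` several times and keep the fastest wall-clock time
        # to reduce scheduling noise.
        best = float("inf")
        for _ in range(repeat):
            t0 = time.perf_counter()
            fn(*args)
            best = min(best, time.perf_counter() - t0)
        return best

    # Compare both strategies on the same logs and filter predicate:
    # print(best_time(read_all_matching, "filter.log*", is_block, 50))
    # print(best_time(read_incremental, paths, is_block, 50))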

