#25161 closed defect (fixed)

Fix any memory problem caused by number of log files to be imported by the webstats module

Reported by: karsten
Owned by: iwakeh
Priority: Medium
Milestone:
Component: Metrics/CollecTor
Version:
Severity: Normal
Keywords:
Cc:
Actual Points:
Parent ID:
Points:
Reviewer:
Sponsor:

Description (last modified by iwakeh)

This ticket should enable the webstats module to import any number (within reasonable bounds) of log files that are below the 2G limit (cf. comment:12).
The handling of larger logs is treated in #25317.


Initial report:
I'm running a modified CollecTor that sanitizes webstats with some tweaks towards bulk importing existing webstats. In particular, it reads files in slices of 10 MiB plus another 2 MiB that overlap with the next slice. I just pushed the changes here.
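
For illustration, a rough sketch of that slicing idea (hypothetical names, not the actual code I pushed):

import java.io.IOException;
import java.io.RandomAccessFile;

public class SliceReader {

  static final int SLICE = 10 * 1024 * 1024;   // 10 MiB per slice
  static final int OVERLAP = 2 * 1024 * 1024;  // 2 MiB shared with the next slice

  /** Reads slice number index, including the overlap into the next slice. */
  static byte[] readSlice(String path, int index) throws IOException {
    try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
      long start = (long) index * SLICE;  // caller ensures start < file length
      int length = (int) Math.min((long) SLICE + OVERLAP, file.length() - start);
      byte[] buffer = new byte[length];
      file.seek(start);
      file.readFully(buffer);
      return buffer;
    }
  }
}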

First of all, the runtime is okay. Not great, but okay. It takes 36 minutes to sanitize 10 MiB. We have 927 MiB of files, so 93 slices, which is going to take ~2.5 days.

However, I ran into an out-of-memory problem at the 6th slice:

2018-02-06 13:30:36,499 INFO o.t.c.w.SanitizeWeblogs:116 Processing 20 logs for dist.torproject.org on archeotrichon.torproject.org.
2018-02-06 13:40:28,968 ERROR o.t.c.c.CollecTorMain:71 The webstats module failed: null
java.lang.OutOfMemoryError: null
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
	at java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677)
	at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735)
	at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
	at org.torproject.collector.webstats.SanitizeWeblogs.findCleanWrite(SanitizeWeblogs.java:127)
	at org.torproject.collector.webstats.SanitizeWeblogs.startProcessing(SanitizeWeblogs.java:91)
	at org.torproject.collector.cron.CollecTorMain.run(CollecTorMain.java:67)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
	at java.lang.StringCoding$StringEncoder.encode(StringCoding.java:300)
	at java.lang.StringCoding.encode(StringCoding.java:344)
	at java.lang.StringCoding.encode(StringCoding.java:387)
	at java.lang.String.getBytes(String.java:958)
	at org.torproject.descriptor.log.LogDescriptorImpl.collectionToBytes(LogDescriptorImpl.java:119)
	at org.torproject.descriptor.log.WebServerAccessLogImpl.<init>(WebServerAccessLogImpl.java:72)
	at org.torproject.collector.webstats.SanitizeWeblogs.storeSanitized(SanitizeWeblogs.java:147)
	at org.torproject.collector.webstats.SanitizeWeblogs.lambda$findCleanWrite$3(SanitizeWeblogs.java:127)
	at org.torproject.collector.webstats.SanitizeWeblogs$$Lambda$38/1233367077.accept(Unknown Source)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
	at java.util.concurrent.ConcurrentHashMap$EntrySpliterator.forEachRemaining(ConcurrentHashMap.java:3606)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
	at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

I didn't look very closely, but I believe we're running out of memory while writing a sanitized file to disk, in particular while converting a list of strings to a byte array that we're then compressing and writing to disk. If this is the case, can we avoid creating that second "copy" of the file in memory and write lines to the file directly?
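
Something like this rough sketch is what I mean, using the JDK's GZIP classes just for illustration (the actual output format and descriptor API may differ):

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPOutputStream;

public class StreamingWriter {

  /** Writes lines through a compressing stream; no full in-memory copy is built. */
  static void writeCompressed(Path outFile, Iterable<String> lines) throws IOException {
    try (BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(
        new GZIPOutputStream(Files.newOutputStream(outFile)), StandardCharsets.UTF_8))) {
      for (String line : lines) {
        writer.write(line);  // only one line at a time is held in memory
        writer.newLine();
      }
    }
  }
}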

Or is this just the operation where we happen to run out of memory from accumulating stuff over time, and where fixing this issue would just mean that we're failing somewhere else, shortly after?

Should I instead have each module run sanitize a single slice and then exit, store which slice has been processed, and run CollecTor in an endless loop? Or something like that, but something that collects all the garbage between slices?

(Note that I still need to check the output and whether that looks okay across slices. Doing that now, unrelated to the issue at hand.)

Child Tickets

Change History (29)

comment:1 Changed 14 months ago by iwakeh

I didn't take a close look at the proposed code changes/additions yet. In #25100 I suggested partitioning the import by date, because that reduces heap usage and is also fine for the bulk import, where we know one-file-one-date is true. Why is the overlap needed?
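
For reference, a rough sketch of the by-date partition I have in mind (hypothetical names; it assumes the date is the last part of the file name, as in dist.torproject.org-access.log-20160531):

import java.nio.file.Path;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DatePartition {

  static final DateTimeFormatter DATE = DateTimeFormatter.ofPattern("yyyyMMdd");

  /** Groups log files by the yyyyMMdd suffix of their file names. */
  static Map<LocalDate, List<Path>> byDate(List<Path> logs) {
    return logs.stream().collect(Collectors.groupingBy(path -> {
      String name = path.getFileName().toString();
      return LocalDate.parse(name.substring(name.length() - 8), DATE);
    }));
  }
}

Each date bucket could then be sanitized and written out on its own, so the heap only ever holds one day of lines.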

Or is this just the operation where we happen to run out of memory from accumulating stuff over time, and where fixing this issue would just mean that we're failing somewhere else, shortly after?

I think this is the case. We're also compressing before writing, unlike all other CollecTor modules.

What heap setting is used, 8G? How many cores are available?

Last edited 14 months ago by iwakeh

comment:2 Changed 14 months ago by iwakeh

Quote from #25100 comment 16:

The logs from weschniakowii for the second quarter of 2017 amount to 32M (compressed) and can be processed in 36 min using 8G. The entire year won't work with just 8G.

85 min and 16G are needed for the entire available archives of meronense and weschniakowii together (59M compressed). The median heap usage is 8.5G and the maximum 15.8G.

So, depending on the hardware, a conservative import strategy might be to import quarterly slices in subsequent CollecTor runs.

comment:3 in reply to:  1 Changed 14 months ago by karsten

Replying to iwakeh:

I didn't take a close look at the proposed code changes/additions yet. In #25100 I suggested partitioning the import by date, because that reduces heap usage and is also fine for the bulk import, where we know one-file-one-date is true.

Yes, my code implements such a partition by date. Though you have a point there with one-file-one-date. I could simplify the code a lot. Let me do that. (I think it won't affect the memory issue, though.)

Or is this just the operation where we happen to run out of memory from accumulating stuff over time, and where fixing this issue would just mean that we're failing somewhere else, shortly after?

I think this is the case. We're also compressing before writing, unlike all other CollecTor modules.

Okay. So maybe there's room for improvement here.

What heap setting is used, 8G? How many cores are available?

-Xmx16g with 16 GiB RAM available. 4 cores.

comment:4 Changed 14 months ago by iwakeh

Is it really necessary to do the slicing in Java? Assuming such imports are limited in number (once a year?), the slices could simply be provided by moving the logs to be imported into appropriate folder structures and running CollecTor to import them.

As this is still set to 'new', are you working on this while importing?

comment:5 in reply to:  4 Changed 14 months ago by karsten

Replying to iwakeh:

Is it really necessary to do the slicing in Java? Assuming such imports are limited in number (once a year?), the slices could simply be provided by moving the logs to be imported into appropriate folder structures and running CollecTor to import them.

You mean we should prepare slices manually? How many slices would we need? Just one per year?

As this is still set to 'new', are you working on this while importing?

I'm not working on this. I'm waiting for new hardware to arrive that has more than 16G RAM. I hope to have that available by next week. Feel free to grab the ticket in the meantime!

comment:6 Changed 13 months ago by iwakeh

Owner: changed from metrics-team to iwakeh
Status: new → accepted

I can devise the shell script for slicing the imports, also making sure the memory consumption of the single runs stays within reasonable limits.
All based on the assumption that the folder structure of the files to be imported will be the same as below 'webstats.tp.o/out'.

comment:7 Changed 13 months ago by karsten

Sounds good!

comment:8 Changed 13 months ago by iwakeh

All aiming at 64G RAM.

comment:9 Changed 13 months ago by iwakeh

Owner: changed from iwakeh to karsten
Status: accepted → assigned

Providing plenty of RAM for the import shortens the processing time quite a bit due to less GC time. The 85 min using 16G for the entire available archives of meronense and weschniakowii together (reported here) reduce to just 65 min with 30G (of which only 22G were actually used at peak, and 10G most of the time). Of course, timing depends highly on the available cores (here only four were available) and, to a lesser extent, on the type of CPU.

If a machine with 64G is available for the import, it can just be run on the entire 'out' folder of webstats.tp.o and should be fine with 48-56G (assuming that weschniakowii represents one of the hosts with a medium to heavier log load).
In case the import gets interrupted, the logs will clearly indicate which hosts were processed successfully. This should be used to move the already completed imports out of the import directory to save processing time. It's no problem if that is forgotten; CollecTor won't re-add or overwrite anything, but the additional scanning might take longer than necessary.

CollecTor properties should be set to single-run mode and have limits turned off for importing the already existing sanitized logs.

I used metrics-lib commit 9f2db9a19 and collector commit 06d1a81d4 and performed some manual checks that the resulting sanitized logs stay the same except for the intended changes (e.g. removal of '?' etc.). All seemed fine.

Assigning to 'karsten' as the import seems ready to go.

Last edited 13 months ago by iwakeh

comment:10 Changed 13 months ago by karsten

Great! I'll start processing files next Tuesday or Wednesday.

comment:11 Changed 13 months ago by karsten

Owner: changed from karsten to iwakeh

So, even with 64G RAM I'm running into the very same issue:

2018-02-20 16:40:46,425 INFO o.t.c.w.SanitizeWeblogs:108 Processing logs for dist.torproject.org on archeotrichon.torproject.org.
2018-02-20 16:54:39,815 ERROR o.t.c.c.CollecTorMain:71 The webstats module failed: null
java.lang.OutOfMemoryError: null
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
	at java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677)
	at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735)
	at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:496)
	at org.torproject.collector.webstats.SanitizeWeblogs.findCleanWrite(SanitizeWeblogs.java:113)
	at org.torproject.collector.webstats.SanitizeWeblogs.startProcessing(SanitizeWeblogs.java:90)
	at org.torproject.collector.cron.CollecTorMain.run(CollecTorMain.java:67)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
	at java.util.Arrays.copyOf(Arrays.java:3236)
	at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
	at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
	at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:135)
	at org.torproject.descriptor.internal.FileType.decompress(FileType.java:109)
	at org.torproject.collector.webstats.SanitizeWeblogs.lineStream(SanitizeWeblogs.java:190)
	at org.torproject.collector.webstats.SanitizeWeblogs.lambda$findCleanWrite$1(SanitizeWeblogs.java:111)
	at org.torproject.collector.webstats.SanitizeWeblogs$$Lambda$15/894365800.apply(Unknown Source)
	at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267)
	at java.util.TreeMap$ValueSpliterator.forEachRemaining(TreeMap.java:2897)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
	at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
2018-02-20 16:54:39,917 INFO o.t.c.c.ShutdownHook:23 Shutdown in progress ... 
2018-02-20 16:54:39,917 INFO o.t.c.cron.Scheduler:127 Waiting at most 10 minutes for termination of running tasks ... 
2018-02-20 16:54:39,917 INFO o.t.c.cron.Scheduler:132 Shutdown of all scheduled tasks completed successfully.
2018-02-20 16:54:39,918 INFO o.t.c.c.ShutdownHook:25 Shutdown finished. Exiting.

Can you try to optimize that code a little more?

comment:12 Changed 13 months ago by karsten

Looking at the stack trace and the input log files, I noticed that two log files are larger than 2G when decompressed:

3.2G in/webstats/archeotrichon.torproject.org/dist.torproject.org-access.log-20160531
584K in/webstats/archeotrichon.torproject.org/dist.torproject.org-access.log-20160531.xz
2.1G in/webstats/archeotrichon.torproject.org/dist.torproject.org-access.log-20160601
404K in/webstats/archeotrichon.torproject.org/dist.torproject.org-access.log-20160601.xz

I just ran another bulk import with just those two files as input and ran into the same exception.

It seems like we shouldn't attempt to decompress these files into a byte[] in FileType.decompress, because Java can only handle arrays with up to 2 billion elements: https://en.wikipedia.org/wiki/Criticism_of_Java#Large_arrays . Maybe we should work with streams there, not byte[].
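
A rough sketch of what I mean by working with streams, assuming the Commons Compress classes that metrics-lib already uses elsewhere (the method name is illustrative):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;
import org.apache.commons.compress.compressors.xz.XZCompressorInputStream;

public class StreamingDecompress {

  /** Streams the lines of an xz-compressed log without materializing the whole file. */
  static Stream<String> decompressedLines(Path xzFile) throws IOException {
    BufferedReader reader = new BufferedReader(new InputStreamReader(
        new XZCompressorInputStream(Files.newInputStream(xzFile)),
        StandardCharsets.UTF_8));
    // The caller has to close the returned stream to release the underlying reader.
    return reader.lines().onClose(() -> {
      try {
        reader.close();
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    });
  }
}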

comment:13 Changed 13 months ago by iwakeh

Status: assigned → needs_information

Thanks for identifying the new issue!

I just created a new ticket for this large log issue, #25317.

This ticket here should rather handle the import of a number of smaller logs.

In order to see what other work is necessary regarding the import:
Could you run the import again without the problematic files and report the memory use throughout the import?
(That can be achieved by using jconsole and saving the values 'behind' the memory graph.)

comment:14 Changed 13 months ago by iwakeh

Description: modified
Summary: Fix another memory problem with the webstats bulk import → Fix any memory problem caused by number of log files to be imported by the webstats module

Changing summary and description to reflect what is to be done here.

comment:15 in reply to: 13 Changed 13 months ago by karsten

Replying to iwakeh:

Thanks, for identifying the new issue!

I just created a new ticket for this large log issue, #25317.

This ticket here should rather handle the import of a number of lesser sized logs.

Okay.

In order to see what other work is necessary regarding the import:
Could you run the import again without the problematic files and report the memory use throughout the import?
(That can be achieved by using jconsole and saving the values 'behind' the memory graph.)

Yes, I'll re-run the import without those two files. If you have a specific command you want me to run for memory usage monitoring, let me know here in the next 15 minutes. Otherwise I'll do my best. :)

comment:16 in reply to:  15 Changed 13 months ago by iwakeh

Replying to karsten:

...

In order to see what other work is necessary regarding the import:
Could you run the import again without the problematic files and report the memory use throughout the import?
(That can be achieved by using jconsole and saving the values 'behind' the memory graph.)


Yes, I'll re-run the import without those two files. If you have a specific command you want me to run for memory usage monitoring, let me know here in the next 15 minutes. Otherwise I'll do my best. :)

No particular command: I would trust you to monitor 'top' and report on that ;-)
I like to use jconsole for memory monitoring and the graphs can be exported easily.

comment:17 Changed 13 months ago by karsten

Alright, going with top then. It's running on a headless machine, so jconsole/jvisualvm are not available, at least not easily. Started.

comment:18 Changed 13 months ago by iwakeh

JConsole can also connect remotely; see JConsole and remote management setup. In a local network, where I suppose you run the import, that should be fine.

comment:19 Changed 13 months ago by karsten

So, after moving all log files for virtual host dist.torproject.org out of the way, I ran the following bulk import:

2018-02-21 19:41:30,021 INFO o.t.c.c.CollecTorMain:66 Starting webstats module of CollecTor.
[...]
2018-02-21 23:01:54,276 DEBUG o.t.d.l.WebServerAccessLogLine:143 Unmatchable line: '[scrubbed by karsten]'.
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.HashMap.newNode(HashMap.java:1747)
        at java.util.HashMap.putVal(HashMap.java:642)
        at java.util.HashMap.put(HashMap.java:612)
        at java.time.format.DateTimeParseContext.setParsedField(DateTimeParseContext.java:365)
        at java.time.format.DateTimeFormatterBuilder$OffsetIdPrinterParser.parse(DateTimeFormatterBuilder.java:3381)
        at java.time.format.DateTimeFormatterBuilder$CompositePrinterParser.parse(DateTimeFormatterBuilder.java:2208)
        at java.time.format.DateTimeFormatter.parseUnresolved0(DateTimeFormatter.java:2010)
        at java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:1939)
        at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1851)
        at java.time.ZonedDateTime.parse(ZonedDateTime.java:597)
        at org.torproject.descriptor.log.WebServerAccessLogLine.makeLine(WebServerAccessLogLine.java:129)
        at org.torproject.collector.webstats.SanitizeWeblogs.lambda$lineStream$7(SanitizeWeblogs.java:192)
        at org.torproject.collector.webstats.SanitizeWeblogs$$Lambda$34/783273584.apply(Unknown Source)
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
        at java.util.Iterator.forEachRemaining(Iterator.java:116)
        at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
        at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
        at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
        at org.torproject.collector.webstats.SanitizeWeblogs.lineStream(SanitizeWeblogs.java:193)
        at org.torproject.collector.webstats.SanitizeWeblogs.lambda$findCleanWrite$1(SanitizeWeblogs.java:111)
        at org.torproject.collector.webstats.SanitizeWeblogs$$Lambda$15/1716312251.apply(Unknown Source)
        at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267)
        at java.util.TreeMap$ValueSpliterator.forEachRemaining(TreeMap.java:2897)
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
        at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
        at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
        at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
        at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
        at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)

I screwed up the top command and monitored an earlier process ID that was long gone when I started this run, so I can't say anything about memory usage. But it looks like we did run out of memory there.

I'd say let's wait for #25317 and then try again with all logs. It might have an impact.

comment:20 Changed 13 months ago by iwakeh

True, let's wait for #25317, but would it be possible to make the log available to me? Or just a list of the hosts processed.
I'm particularly interested to see which hosts were processed before the OOM. This would help testing before you run the import again.
Thanks!

Last edited 13 months ago by iwakeh

comment:21 Changed 13 months ago by iwakeh

I assume you are processing real logs? Otherwise the line DEBUG o.t.d.l.WebServerAccessLogLine:143 Unmatchable line ... shouldn't appear.

comment:22 Changed 13 months ago by karsten

Hmm, I can't send you the logs easily, because there are so many of them. And I think I pasted the most useful parts above anyway.

I did not process real logs. I processed all previously sanitized logs except for dist.torproject.org logs. The log message/exception above looks like a result of running out of memory, not of the request line being ill-formed.

But really, I think focusing on #25317 for now makes more sense. Happy to provide more details from the next test. Hope that's okay!

comment:23 Changed 13 months ago by iwakeh

Yes, #25317 is the focus. Could you just provide output from (please edit the log path, if necessary)

grep -hi "processing " logs/collector-all.*

for the latest run?
That would point out which hosts were processed fine, how many there were, and which one led to the final exception.

comment:24 Changed 13 months ago by karsten

Sent you that output via private mail.

comment:25 Changed 13 months ago by iwakeh

Thanks for the logs!

Tickets #25329 and f #25329 are ready for review now.

For the next import runs, regarding comment:18, the tl;dr:
Assuming server and laptop are on a secured local network and only public data is processed, the shortest way to remote monitoring is to add the following to the java call:

-Dcom.sun.management.jmxremote.port=<free port>
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false

If the server has several IPs, it might be necessary to add

-Djava.rmi.server.hostname=<ip for jmx>
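
Putting it together, an example call could look like this (heap size, port, IP, and jar name are only placeholders):

java -Xmx30g \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=192.168.1.10 \
  -jar collector.jar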

On the laptop, simply run jconsole and enter <ip for jmx>:<free port> in the remote connection field of the GUI.

Of course, more secure settings are possible in many ways; for those, see the link above in comment:18.
Hth!

comment:26 Changed 13 months ago by karsten

Changes look good. I'll start a new bulk import process now and will merge later, if everything looks good. Thanks for the jconsole hints!

comment:27 Changed 13 months ago by iwakeh

The import should either be attempted in less-than-yearly batches or wait for the patch for #25317.

comment:28 Changed 13 months ago by karsten

The import with yearly batches worked out!

Is there anything to be merged here, except for #25317?

If not, can we resolve this ticket?

comment:29 Changed 13 months ago by iwakeh

Resolution: fixed
Status: needs_information → closed

Now log files of any size and count can be processed.
This issue was resolved thoroughly.
Closing.

Thanks!
