#24218 closed enhancement (fixed)

Implement new metrics-web module for IPv6 relay statistics

Reported by: karsten
Owned by: metrics-team
Priority: Medium
Milestone:
Component: Metrics/Statistics
Version:
Severity: Normal
Keywords:
Cc: irl
Actual Points:
Parent ID:
Points:
Reviewer: iwakeh
Sponsor:

Description

The sample graphs I made for #23761 are based on some quick-and-dirty Java code that we need to rewrite in a more robust and more scalable way before putting these graphs on Tor Metrics.

Here's my plan for implementing this module, and I'm curious to hear possible alternatives or improvements:

  • We start with a design quite similar to the recently added webstats module. This basically means creating:
    • a PostgreSQL database schema for import tables and aggregations/views and
    • a Java class to import into the database and run queries.
  • I believe that the data aggregation won't scale to years of data. My hope is that we can solve this in the database by using some triggers to only include newly added data in the aggregation.
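To illustrate the trigger idea, here is a minimal, hypothetical sketch (table and column names are made up for illustration, not the final schema): an AFTER INSERT trigger folds each newly imported row into a running daily aggregate, so earlier data never needs to be rescanned.

CREATE TABLE statuses_demo (
  valid_after TIMESTAMP WITHOUT TIME ZONE NOT NULL
);
CREATE TABLE aggregated_daily (
  day DATE PRIMARY KEY,
  status_count INTEGER NOT NULL
);
CREATE OR REPLACE FUNCTION fold_into_aggregate() RETURNS TRIGGER AS $$
BEGIN
  -- Try to update the running count for the new row's day first.
  UPDATE aggregated_daily
    SET status_count = status_count + 1
    WHERE day = CAST(NEW.valid_after AS DATE);
  -- If no aggregate row exists for that day yet, create it.
  IF NOT FOUND THEN
    INSERT INTO aggregated_daily (day, status_count)
      VALUES (CAST(NEW.valid_after AS DATE), 1);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER fold_new_rows
  AFTER INSERT ON statuses_demo
  FOR EACH ROW EXECUTE PROCEDURE fold_into_aggregate();

With something like this in place, importing new data keeps the aggregate up to date incrementally instead of recomputing it over years of history.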

Child Tickets

Attachments (1)

data-spec-template.pdf (139.7 KB) - added by iwakeh 21 months ago.


Change History (28)

comment:1 in reply to: description; Changed 22 months ago by iwakeh

Component: Metrics/Statistics → Metrics/Website

Moved this to sub-component Website rather than Statistics.

Replying to karsten:

The sample graphs I made for #23761 are based on some quick-and-dirty Java code that we need to rewrite in a more robust and more scalable way before putting these graphs on Tor Metrics.

Here's my plan for implementing this module, and I'm curious to hear possible alternatives or improvements:

I'd suggest using this module addition to modernize the approach to data aggregation. Even though webstats was added most recently, it only copies the old-fashioned style of the pre-existing modules.
A new approach could also better address your concerns about scaling and performance.

...This basically means creating:

  • a PostgreSQL database schema for import tables and aggregations/views and
  • a Java class to import into the database and run queries.

Steps look fine. Maybe include the CSV creation in the Java code, too, to avoid shell DB exports?

  • I believe that the data aggregation won't scale to years of data. My hope is that we can solve this in the database by using some triggers to only include newly added data in the aggregation.

As said above, we should use this additional module to improve & modernize.

comment:2 in reply to: 1; Changed 22 months ago by karsten

Replying to iwakeh:

Moved this to sub-component Website rather than Statistics.

Heh, that's where all the tickets in Statistics came from when we split up Website into data-aggregation parts (Statistics) and website/presentation parts (Website). Following that logic, this ticket would belong in Statistics. But maybe the distinction is too artificial to really make sense. I don't feel strongly about where we put it, but I think if we pick Website we should consider moving most/all other tickets from Statistics to Website, too.

Replying to karsten:

The sample graphs I made for #23761 are based on some quick-and-dirty Java code that we need to rewrite in a more robust and more scalable way before putting these graphs on Tor Metrics.

Here's my plan for implementing this module, and I'm curious to hear possible alternatives or improvements:

I'd suggest using this module addition to modernize the approach to data aggregation. Even though webstats was added most recently, it only copies the old-fashioned style of the pre-existing modules.

What modernizations do you have in mind?

A new approach could also better address your concerns about scaling and performance.

...This basically means creating:

  • a PostgreSQL database schema for import tables and aggregations/views and
  • a Java class to import into the database and run queries.

Steps look fine. Maybe include the CSV creation in the Java code, too, to avoid shell DB exports?

Yes, that's what the webstats module does, too.

  • I believe that the data aggregation won't scale to years of data. My hope is that we can solve this in the database by using some triggers to only include newly added data in the aggregation.

As said above, we should use this additional module to improve & modernize.

comment:3 in reply to: 2; Changed 22 months ago by iwakeh

Component: Metrics/Website → Metrics/Statistics

Replying to karsten:

Replying to iwakeh:

Moved this to sub-component Website rather than Statistics.

Heh, that's where all the tickets in Statistics came from when we split up Website into data-aggregation parts (Statistics) and website/presentation parts (Website). Following that logic, this ticket would belong in Statistics. But maybe the distinction is too artificial to really make sense. I don't feel strongly about where we put it, but I think if we pick Website we should consider moving most/all other tickets from Statistics to Website, too.

Sorry for the confusion; that Statistics module happened to live in a blind spot for me. Changed it back.
(Now I know where some tickets are that I missed recently ...)

Replying to all else later.

comment:4 Changed 21 months ago by karsten

Status: new → needs_review

So, I rewrote the earlier prototype into a metrics-web module that uses a PostgreSQL database. Please review my task-24218 branch.

Here are some first (meta) statistics on how it performs:

  • Processed five weeks of descriptors from 2017-11-01 to 2017-12-04, roughly 500M in XZ-compressed form, plus recent descriptors from the past three days.
  • Processing took ~12 minutes on my laptop.
  • The resulting database has a size of ~1G before vacuuming and ~150M afterwards.

Remaining tasks:

  • Add a specification of the CSV file and three new graph pages to Tor Metrics. I'll take care of this.
  • Import the descriptor archive since 2008 somewhere, though not necessarily on the production system. I can take care of this, but after the first review round when it's clear whether the database schema can stay.
  • Find a way to test the Database class. I briefly tried testing it with an in-memory HSQLDB database and got it working to some extent. But we're using a few features that are specific to PostgreSQL and that we'd have to replace in these tests. The result would be that we're testing something that is similar to the PostgreSQL database but not quite the same. And the code in Database looks trivial enough not to contain major bugs. I think I'd prefer to test the whole code with real descriptors as input and a real test PostgreSQL database to do the aggregation. Let's try to find a testing approach that we can later apply to other modules. (This shouldn't block either review or deployment.)
  • Write a specification of the new CSV file according to what we said we'll do for Sponsor 13.

comment:5 in reply to: 4; Changed 21 months ago by iwakeh

Reviewer: iwakeh

Replying to karsten:

So, I rewrote the earlier prototype into a metrics-web module that uses a PostgreSQL database. Please review my task-24218 branch.

Adding this on top of my review list.

Here are some first (meta) statistics on how it performs:

  • Processed five weeks of descriptors from 2017-11-01 to 2017-12-04, roughly 500M in XZ-compressed form, plus recent descriptors from the past three days.
  • Processing took ~12 minutes on my laptop.
  • The resulting database has a size of ~1G before vacuuming and ~150M afterwards.

Good to know about.

Remaining tasks:

  • Add a specification of the CSV file and three new graph pages to Tor Metrics. I'll take care of this.

New (child) ticket?

  • Import the descriptor archive since 2008 somewhere, though not necessarily on the production system. I can take care of this, but after the first review round when it's clear whether the database schema can stay.

Yep. Will look at that first.

  • Find a way to test the Database class. I briefly tried testing it with an in-memory HSQLDB database and got it working to some extent. But we're using a few features that are specific to PostgreSQL and that we'd have to replace in these tests. The result would be that we're testing something that is similar to the PostgreSQL database but not quite the same. And the code in Database looks trivial enough not to contain major bugs. I think I'd prefer to test the whole code with real descriptors as input and a real test PostgreSQL database to do the aggregation. Let's try to find a testing approach that we can later apply to other modules. (This shouldn't block either review or deployment.)

ok.

  • Write a specification of the new CSV file according to what we said we'll do for Sponsor 13.

New ticket.
I attach a pdf with a content structure suggestion for this kind of document. Using pdf is just incidental; I can also move this to a wiki page or pad.

Changed 21 months ago by iwakeh

Attachment: data-spec-template.pdf added

comment:6 in reply to: 5; Changed 21 months ago by karsten

Replying to iwakeh:

Replying to karsten:

Remaining tasks:

  • Add a specification of the CSV file and three new graph pages to Tor Metrics. I'll take care of this.

New (child) ticket?

Not sure that even more (child) tickets will help. I just updated #23761 for the new graph pages, and I'll add the specification tomorrow, either as part of this ticket or as part of #24217.

  • Write a specification of the new CSV file according to what we said we'll do for Sponsor 13.

New ticket.
I attach a pdf with a content structure suggestion for this kind of document. Using pdf is just incidental; I can also move this to a wiki page or pad.

Thanks for starting that! Happy to give that more thought, too. The format doesn't matter much for the moment. And note that we already have #24217 for this.

comment:7 Changed 21 months ago by karsten

Please also review commit 57c58b5 in my tasks-24218-23761 branch with the CSV file specification.

comment:8 Changed 21 months ago by iwakeh

Status: needs_review → needs_revision

Starting with the SQL, as this is most important.
I'll add a ticket later (unless I find an existing one) for (finally) writing up SQL coding guidelines and adapting all existing scripts.
From reading intensely through the older SQL scripts (for the tech report) I know that the naming in init-servers-ipv6.sql is consistent with the old naming, but there are parts that are really hard to read and often only understandable after reading the Java code in addition.

Here are some topics that should go into the guide doc (because of the upcoming guide doc, I'm trying to be verbose):

  • Use names (not numbers) for group-by and order-by, which is fine in this script.
  • Types and function names etc. shouldn't be used as column or field identifiers, for example count, date, timestamp, and server (the enum type defined in the script). For the server enum I'd suggest using server_enum, so the column (e.g. in table statuses) can be defined as server server_enum NOT NULL, which makes it immediately clear what that column will contain. Below I add comments to the various table definitions.
  • Names should be as self-explanatory as reasonably possible. Some names only make sense after reading the table's comment, and others only after reading the Java code in addition. For example, count in table aggregated shouldn't be used in the first place because of the function count(), and secondly its meaning can only be derived from the table's comment. Changing count to server_count would make the meaning obvious. Similarly, advertised_bandwidth to advertised_bandwidth_bytes.
  • For multi-line comments, C-style /* ... */ could and maybe should be used, and -- ... only for one-liners. (That's of minor importance, though.)
  • Indentation of code in functions for readability.

Detailed comments (suggesting name changes, asking questions):

  • server_descriptors:
    CREATE TABLE server_descriptors (
      digest BYTEA PRIMARY KEY,  -- digest of what?  Maybe: sha1_desc_digest
      advertised_bandwidth INTEGER, -- in bytes?  Maybe adv_bandwidth_bytes?
      announced BOOLEAN NOT NULL, -- announced_ipv6
      exiting BOOLEAN  -- exit_flag or exit_relay (making obvious that this will be null for bridges)
    );
    
    
  • server enum:
    CREATE TYPE server_enum AS ENUM ('relay', 'bridge');
    
  • statuses
    CREATE TABLE statuses (
      status_id SERIAL PRIMARY KEY, 
      server server NOT NULL,  -- rather: server server_enum NOT NULL,
      timestamp TIMESTAMP WITHOUT TIME ZONE NOT NULL,  -- valid_after
      running INTEGER NOT NULL, -- running_count (otherwise I'd assume this to be a boolean)
      UNIQUE (server, timestamp)
    );
    
  • status_entries
    CREATE TABLE status_entries (
      status_id INTEGER REFERENCES statuses (status_id) NOT NULL,
      digest BYTEA NOT NULL,  -- as above in server_descriptors
      guard BOOLEAN,  -- guard_relay 
      exit BOOLEAN,  -- exit_relay
      confirmed BOOLEAN, -- confirmed_ipv6_relay
      UNIQUE (status_id, digest)
    );
    
  • aggregated
    CREATE TABLE aggregated (  -- aggregated_flags_ipv6
      status_id INTEGER REFERENCES statuses (status_id) NOT NULL,
      guard BOOLEAN,  -- cf. above
      exit BOOLEAN,  -- cf. above
      announced BOOLEAN,  -- cf. above
      exiting BOOLEAN,  -- cf. above
      confirmed BOOLEAN,  -- cf. above
      count INTEGER NOT NULL,  -- server_count or server_count_sum
      advertised_bandwidth BIGINT, -- adv_bandwidth_bytes or adv_bandwidth_bytes_sum
      CONSTRAINT aggregated_unique
        UNIQUE (status_id, guard, exit, announced, exiting, confirmed)
    );
    
  • function aggregate:

Maybe call it aggregate_flags_ipv6?
For the aggregate function I got the following error running the script:

psql --dbname=reviewipv6 -f modules/servers-ipv6/src/main/resources/init-servers-ipv6.sql 
CREATE TABLE
CREATE TYPE
CREATE TABLE
CREATE TABLE
CREATE TABLE
psql:modules/servers-ipv6/src/main/resources/init-servers-ipv6.sql:75: ERROR:  syntax error at or near "ON"
LINE 9: ON CONFLICT ON CONSTRAINT aggregated_unique
        ^
CREATE VIEW
  • servers_ipv6: Maybe ipv6servers? And the date column needs to be renamed.

Some Java-related questions (from only skimming the code):
The package name, as mentioned at the top. Why are the classes not declared public? Couldn't the 'ParsedNetwork' part of the class names ParsedNetworkStatus and ParsedNetworkDescriptor be omitted here?

General naming:
The module name should be the same as the Java package (especially without dashes); maybe change the Java package to ipv6servers? It seems more readable (to me) than serversipv6.

I continue with an in-depth Java review when the SQL is settled.

The structure looks fine regarding the upcoming refactoring of metrics-web.

comment:9 Changed 21 months ago by karsten

Status: needs_revision → needs_review

All good suggestions above! Please do another review of squash commit ba969fe in my tasks-24218-23761 branch.

Regarding the error with ON CONFLICT, can you check which PostgreSQL version you have? That's a relatively new feature that I think was added in 9.5, but I checked that Debian stretch has 9.6, so it should be available. (I tried only locally with the PostgreSQL I got from brew.) If it is not available, I'll have to rewrite things quite a bit.
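For reference, here is a self-contained illustration of the syntax in question (the demo table is hypothetical, not part of the module's schema); it runs on PostgreSQL 9.5 or later and fails with exactly this kind of syntax error on earlier versions:

-- Requires PostgreSQL >= 9.5; older versions reject ON CONFLICT.
CREATE TABLE upsert_demo (
  k INTEGER,
  v INTEGER NOT NULL,
  CONSTRAINT upsert_demo_unique UNIQUE (k)
);
INSERT INTO upsert_demo (k, v) VALUES (1, 10);
-- The second insert hits the constraint and updates instead of failing.
INSERT INTO upsert_demo (k, v) VALUES (1, 32)
  ON CONFLICT ON CONSTRAINT upsert_demo_unique
  DO UPDATE SET v = upsert_demo.v + EXCLUDED.v;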

Regarding the module renaming, I'm fine with ipv6servers. How about I rename things after you're done with your review? Otherwise that's basically going to rewrite all commits.

Regarding your Java-related questions: Classes are not yet public because they were internal classes at the beginning, but they should be made public now. The reason for prefixing classes with Parsed was to avoid confusion with metrics-lib classes.

Waiting for another review of the database parts before starting the bulk import. And to be clear, renaming things is okay even after finishing the import. But changing types or even table structures would be much harder. If you have any doubts about the database schema aggregating things correctly, please be sure to bring them up here!

comment:10 Changed 21 months ago by karsten

(Thanks!)

comment:11 Changed 21 months ago by iwakeh

A quick glance tells me there is one 'date' column left in the view, which needs to be renamed.

I used psql (PostgreSQL) 9.5.10 (ubuntu), will check with 9.6 later (a quick update of my postgres failed :-/ ).
Would be nice to have the 'on conflict'.
Regarding the structure I am wondering about more indexes, but these could be added later. Tests for the SQL to verify the results are missing. Just from reading it seems fine, but that's only reading.
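Regarding the index question, later additions would be one-liners anyway; a hypothetical example, assuming the status_entries table from the review above (whether it pays off depends on the actual query patterns):

CREATE INDEX status_entries_digest ON status_entries (digest);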

comment:12 in reply to: 11; Changed 21 months ago by karsten

Replying to iwakeh:

A quick glance tells me there is one 'date' column left in the view, which needs to be renamed.

Huh, yes. Fixed now in commit 3c6b01b (where I also updated the pgTAP tests which I forgot earlier).

I used psql (PostgreSQL) 9.5.10 (ubuntu), will check with 9.6 later (a quick update of my postgres failed :-/ ).
Would be nice to have the 'on conflict'.

It also works fine in my Debian squeeze VM. For now I'll assume it's a problem with your PostgreSQL version.

Regarding the structure I am wondering about more indexes, but these could be added later.

True, we could add indexes later. Though I didn't spot any obviously missing indexes in my tests so far. The largest amount of data I imported was 2 years, but I didn't test continuous updates after that. I guess we'll learn.

Tests for the SQL to verify the results are missing. Just from reading it seems fine, but that's only reading.

I think that what we have as tests is at least a start. To be honest, I'm not entirely happy with the JUnit/pgTAP mix here. But I hope that we'll improve that over time as we write more modules following this schema.

Green light for archive import?

comment:13 in reply to: 12; Changed 21 months ago by iwakeh

Replying to karsten:

Replying to iwakeh:

A quick glance tells me there is one 'date' column left in the view, which needs to be renamed.

Huh, yes. Fixed now in commit 3c6b01b (where I also updated the pgTAP tests which I forgot earlier).

Oh, the SQL tests were 'hiding' in test resources! That's why I ignored them earlier. The standard Metrics environment expects these in test/sql; moving them can be done now or when this module is refactored.

I used psql (PostgreSQL) 9.5.10 (ubuntu), will check with 9.6 later (a quick update of my postgres failed :-/ ).
Would be nice to have the 'on conflict'.

It also works fine in my Debian squeeze VM. For now I'll assume it's a problem with your PostgreSQL version.

Hmm, Debian stretch is the current stable and has 9.6, while jessie provides 9.4. Unless our server gets the current stable soon, it might need the backported postgres.

What version is on the old squeeze?

Regarding the structure I am wondering about more indexes, but these could be added later.

True, we could add indexes later. Though I didn't spot any obviously missing indexes in my tests so far. The largest amount of data I imported was 2 years, but I didn't test continuous updates after that. I guess we'll learn.

Ok.

Tests for the SQL to verify the results are missing. Just from reading it seems fine, but that's only reading.

I think that what we have as tests is at least a start. To be honest, I'm not entirely happy with the JUnit/pgTAP mix here. But I hope that we'll improve that over time as we write more modules following this schema.

Green light for archive import?

Well, I think it's ok to go.

Could you change the module path from modules/servers-ipv6 to modules/ipv6servers and the package name from serversipv6 to ipv6servers before I review the java code?

comment:14 Changed 21 months ago by karsten

Okay, I pushed commit df98c16 which renames the module from servers-ipv6 (and packages from serversipv6) to ipv6servers. It also moves the pgTAP file to test/sql.

I don't follow the PostgreSQL versions part. The 9.4 you mention is for jessie, but we're on stretch now. I believe we should be doing fine with the PostgreSQL version on meronense:

metrics@meronense:~$ psql --version
psql (PostgreSQL) 9.6.6

I started a local import of the 2017 data. If that goes well, I'll import a few more years over night.

comment:15 Changed 20 months ago by karsten

I finished the import of descriptors back to 2007. And while doing so I discovered an issue related to a UNIQUE constraint and NULL values. Databases can really be tricky sometimes.
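The ticket doesn't spell out the exact problem, but a classic PostgreSQL pitfall fits this description: two NULLs never compare as equal, so a UNIQUE constraint does not reject rows that duplicate each other in their NULL columns. A minimal demonstration with a hypothetical table:

CREATE TABLE unique_null_demo (
  status_id INTEGER NOT NULL,
  guard_relay BOOLEAN,
  UNIQUE (status_id, guard_relay)
);
-- Both inserts succeed: NULL is not considered equal to NULL,
-- so the UNIQUE constraint never fires.
INSERT INTO unique_null_demo VALUES (1, NULL);
INSERT INTO unique_null_demo VALUES (1, NULL);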

I just pushed three new commits to my tasks-24218-23761 branch. Still ready for review!

comment:16 Changed 20 months ago by karsten

Cc: irl added

comment:17 Changed 20 months ago by irl

Is there any indentation or spacing we can do to make it clearer where the magrittr pipelines begin/end? For the purposes of formatting, building up ggplot2 plots can be treated the same as a %>% pipeline, so no distinction is needed there:

plot_relays_ipv6 <- function(start, end, path) {
  all_relay_data <- read.csv(
    "/srv/metrics.torproject.org/metrics/shared/stats/ipv6servers.csv",
    colClasses = c("valid_after_date" = "Date")) %>%
    filter(server == "relay")
  start_date <- max(as.Date(start), min(all_relay_data$valid_after_date))
  end_date <- min(as.Date(end), max(all_relay_data$valid_after_date),
    Sys.Date() - 2)
  date_breaks <- date_breaks(as.numeric(end_date - start_date))
  all_relay_data %>%
    filter(valid_after_date >= start_date, valid_after_date <= end_date) %>%
    group_by(valid_after_date) %>%
    summarize(total = sum(server_count_sum_avg),
      announced = sum(server_count_sum_avg[announced_ipv6 == 't']),
      reachable = sum(server_count_sum_avg[reachable_ipv6_relay == 't']),
      exiting = sum(server_count_sum_avg[exiting_ipv6_relay == 't'])) %>%
    merge(data.frame(valid_after_date = seq(start_date, end_date,
      by = "1 day")), all = TRUE) %>%
    gather(total, announced, reachable, exiting, key = "category",
      value = "count") %>%
    ggplot(aes(x = valid_after_date, y = count, colour = category)) +
    geom_line(size = 1) +
    scale_x_date(name = paste("\nThe Tor Project - ",
      "https://metrics.torproject.org/", sep = ""),
      labels = date_format(date_breaks$format),
      date_breaks = date_breaks$major,
      date_minor_breaks = date_breaks$minor) +
    scale_y_continuous(name = "") +
    scale_colour_hue(name = "", h.start = 90,
      breaks = c("total", "announced", "reachable", "exiting"),
      labels = c("Total (IPv4) OR", "IPv6 announced OR", "IPv6 reachable OR",
        "IPv6 exititing")) +
    expand_limits(y = 0) +
    ggtitle("Relays by IP version") +
    theme(legend.position = "top")
  ggsave(filename = path, width = 8, height = 5, dpi = 150)
}

Other than that, the R looks good to me.

comment:18 in reply to: 17; Changed 20 months ago by karsten

Replying to irl:

Is there any indentation or spacing we can do to make it clearer where the magrittr pipelines begin/end? For the purposes of formatting, building up ggplot2 plots can be treated the same as a %>% pipeline, so no distinction is needed there:

Good idea! Changed in a new commit.

Other than that, the R looks good to me.

Thanks for looking!

comment:19 Changed 20 months ago by karsten

I just pushed a new branch tasks-24218-23761-2 to my repository which has all squash/fixup commits squashed and which is rebased to current master.

comment:20 Changed 20 months ago by iwakeh

Please find the refactored branch and some additional commits on my tasks-24218-23761-2 branch. There is one placeholder commit where a comment explaining the reasoning would be nice; I couldn't come up with a good comment myself. I hope all commit comments are self-explanatory.

Regarding R code style I'd suggest breaking lines before operators as we do in Java. It makes the code more readable when the beginning of an indented line indicates what happens. For example:

all_relay_data
    %>% filter(valid_after_date >= start_date, valid_after_date <= end_date)
    %>% group_by(valid_after_date)
    %>% summarize(total = sum(server_count_sum_avg),
        announced = sum(server_count_sum_avg[announced_ipv6 == 't']),
        reachable = sum(server_count_sum_avg[reachable_ipv6_relay == 't']),
        exiting = sum(server_count_sum_avg[exiting_ipv6_relay == 't']))
    %>% merge(data.frame(valid_after_date = seq(start_date, end_date,
        by = "1 day")), all = TRUE)...

Here the beginning of a next pipe step is clearly visible and distinguishable from the continuation of a parameter listing.

The SQL script now works ok with my 9.6 postgres installation (I forgot to upgrade my cluster after the 9.6 installation, which is not done automatically).

Could you post the command for running the pgTAP SQL tests, so I can create the corresponding ant task without experimenting too much?

comment:21 Changed 20 months ago by iwakeh

Added a new ticket for the R style guide discussion: #24707
No pressure to change it here, unless we all agree.

comment:22 in reply to: 20; Changed 20 months ago by karsten

Replying to iwakeh:

Please find the refactored branch and some additional commits on my tasks-24218-23761-2 branch. There is one placeholder commit where a comment explaining the reasoning would be nice; I couldn't come up with a good comment myself. I hope all commit comments are self-explanatory.

Thanks! Those additional commits look fine!

Regarding the placeholder comment, what we're doing there is checking whether the line contains an IPv6 address, which by itself contains at least two colons, whereas an IPv4 address and subsequent TCP port contain only one colon. Agreed that this deserves a comment. Or two, because we're doing that in two places.
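The heuristic itself fits in one line; expressed as a quick hypothetical SQL check (the module does this in Java, so this is only to illustrate the rule):

SELECT addr,
    length(addr) - length(replace(addr, ':', '')) >= 2 AS is_ipv6
  FROM (VALUES ('198.51.100.7:443'), ('[2001:db8::7]:443')) AS t (addr);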

Regarding R code style I'd suggest breaking lines before operators as we do in Java. It makes the code more readable when the beginning of an indented line indicates what happens. For example:

all_relay_data
    %>% filter(valid_after_date >= start_date, valid_after_date <= end_date)
    %>% group_by(valid_after_date)
    %>% summarize(total = sum(server_count_sum_avg),
        announced = sum(server_count_sum_avg[announced_ipv6 == 't']),
        reachable = sum(server_count_sum_avg[reachable_ipv6_relay == 't']),
        exiting = sum(server_count_sum_avg[exiting_ipv6_relay == 't']))
    %>% merge(data.frame(valid_after_date = seq(start_date, end_date,
        by = "1 day")), all = TRUE)...

Here the beginning of a next pipe step is clearly visible and distinguishable from the continuation of a parameter listing.

The formatting I used is based on examples I found that are using tidyr, dplyr, ggplot2, etc. Let's look around at what style guides exist for R rather than starting from what we're doing in Java. But let's do that in 2018 on the other ticket you opened.

The SQL script now works ok with my 9.6 postgres installation (I forgot to upgrade my cluster after the 9.6 installation, which is not done automatically).

Could you post the command for running the pgTAP SQL tests, so I can create the corresponding ant task without experimenting too much?

Should be as simple as psql -f src/test/sql/ipv6servers/test-ipv6servers.sql ipv6servers. (Untested.) Let me know if that doesn't work and you can't get it to work.
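For orientation, a pgTAP test file generally looks like the following minimal sketch (hypothetical checks, not the actual contents of test-ipv6servers.sql); psql simply executes it and prints TAP output:

-- Minimal pgTAP sketch; requires CREATE EXTENSION pgtap; in the test db.
BEGIN;
SELECT plan(2);
SELECT has_table('server_descriptors');
SELECT has_type('server_enum');
SELECT * FROM finish();
ROLLBACK;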

What remains to be done after this? Can I prepare deployment?

comment:23 Changed 20 months ago by iwakeh

Status: needs_review → merge_ready

Thanks for the command line (saves me from experimenting too much)! I'll prepare the additional ant task, but that shouldn't halt deployment.

If preparing deployment means testing the code and adding the comment, let's go ahead.
Thus, setting to merge-ready.


comment:24 Changed 20 months ago by iwakeh

Please find four more commits adding ant tasks for running pgTAP tests.

The userstats tests don't seem to pass anymore; new ticket?

(Nothing that would prevent deployment.)


comment:25 in reply to: 24; Changed 20 months ago by karsten

Replying to iwakeh:

Please find four more commits adding ant tasks for running pgTAP tests.

I'm currently working on merging and deploying, but I'll look after that (which might not be today, depending on how long this takes).

The userstats tests don't seem to pass anymore; new ticket?

Yes, please. (Thanks for catching that!)

comment:26 Changed 20 months ago by iwakeh

Ticket #24713 takes care of the test failures.

comment:27 Changed 20 months ago by karsten

Resolution: fixed
Status: merge_ready → closed

Okay, I merged those four commits and deployed the new module and graphs on Tor Metrics! Everything else will be handled in new tickets. That means we can close this ticket, just in time for the holiday break. Yay! Thanks so much!
