Fwd: [IO-500] IO500 ISC20 Call for Submission

John Bent
Wed, May 27, 2020 11:20 PM

FYI.  Hope to see some awesome OrangeFS submissions for our virtual IO500
BOF!

Thanks,

John

---------- Forwarded message ---------
From: committee--- via IO-500 <io-500@vi4io.org>
Date: Fri, May 22, 2020 at 1:53 PM
Subject: [IO-500] IO500 ISC20 Call for Submission
To: <io-500@vi4io.org>

Deadline: 08 June 2020 AoE

The IO500 (http://io500.org/) is now accepting and encouraging submissions
for the upcoming 6th IO500 list. Once again, we are also accepting 10 Node
Challenge submissions to encourage small-scale results. The new ranked
lists will be announced via live-stream at a virtual session. We hope to
see many new results.

The benchmark suite is designed to be easy to run, and the community has
multiple active support channels to help with any questions. Please note
that submissions of all sizes are welcome; the site has customizable
sorting, so it is possible, for example, to submit results from a small
system and still achieve a very good per-client score. Additionally, the
list is about much more than just the raw rank; all submissions help the
community by collecting and publishing a wider corpus of data. More
details below.

Following the success of the Top500 in collecting and analyzing historical
trends in supercomputer technology and evolution, the IO500
(http://io500.org/) was created in 2017, published its first list at SC17,
and has grown exponentially since then. The need for such an initiative has
long been recognized within High-Performance Computing; however, defining
appropriate benchmarks has long been challenging. After long and spirited
discussion, the community finally reached consensus on a suite of
benchmarks and a metric for resolving the scores into a single ranking.

The multi-fold goals of the benchmark suite are as follows:

  1. Maximizing simplicity in running the benchmark suite
  2. Encouraging optimization and documentation of tuning parameters for
    performance
  3. Allowing submitters to highlight their “hero run” performance numbers
  4. Forcing submitters to simultaneously report performance for
    challenging IO patterns

Specifically, the benchmark suite includes a hero run of both IOR and
mdtest, configured however the submitter chooses in order to maximize
performance and establish an upper bound. It also includes an IOR and
mdtest run with highly constrained parameters that force a difficult usage
pattern, in an attempt to determine a lower bound. Finally, it includes a
namespace search, as this has been determined to be a highly sought-after
feature in HPC storage systems that has historically not been well
measured. Submitters are encouraged to share their tuning insights for
publication.
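As a rough illustration of how the suite above is driven, the io500
application distributed around this timeframe read its settings from a
small ini-style file. The sketch below is an assumption based on a
contemporaneous release; the exact section and option names may differ
between versions, so consult the submission rules page for the
authoritative template.

```ini
# Hypothetical minimal io500 configuration sketch; option names are
# assumptions and may differ by release.
[global]
datadir = ./datafiles      ; where the IOR and mdtest phases write working data
resultdir = ./results      ; where result files are collected for submission

[debug]
stonewall-time = 300       ; valid list submissions require a 300-second stonewall
```

The stonewall timer is what bounds each phase's runtime regardless of
system size, which is part of why submissions from small systems remain
practical.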

The goals of the community are also multi-fold:

  1. Gather historical data for the sake of analysis and to aid
    predictions of storage futures
  2. Collect tuning data to share valuable performance optimizations
    across the community
  3. Encourage vendors and designers to optimize for workloads beyond
    “hero runs”
  4. Establish bounded expectations for users, procurers, and
    administrators

10 Node I/O Challenge

The 10 Node Challenge is conducted using the regular IO500 benchmark, with
the additional rule that exactly 10 client nodes must be used to run it.
You may use any shared storage and, e.g., any number of servers. When
submitting to the IO500 list, you can opt in to “Participate in the 10
compute node challenge only”, in which case your results will not be
included in the ranked list. Other 10-node submissions will be included in
both the full list and the ranked list. We will announce the 10 Node
Challenge results in a separate derived list and in the full list, but not
on the ranked IO500 list at https://io500.org/.

The information and rules for ISC20 submissions are available here:
https://www.vi4io.org/io500/rules/submission

Thanks,

The IO500 Committee


IO-500 mailing list
IO-500@vi4io.org
https://www.vi4io.org/mailman/listinfo/io-500
