Find a Way to Handle Multiple Results for Benchmarks #5

Open
opened 2023-07-06 00:28:48 -04:00 by gballan · 0 comments
Owner

I require that every benchmark be run at least four times, with the three most consistent/best runs averaged to tease out any fluky runs, or, in extreme cases, to signal that the results are so wild there must be a problem with the test setup. Thus there should be a way to group results together as a single test for a card, which comparisons can then be made against.
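Something like the following is what I have in mind for the summarization step (a rough Python sketch; the function name, the "closest to the median" rule, and the 10% threshold are placeholders, not anything that exists in the repo yet):

```python
import statistics

MIN_RUNS = 4           # every benchmark must be run at least this many times
RUNS_TO_AVERAGE = 3    # average the three most consistent of those runs
MAX_SPREAD_PCT = 10.0  # hypothetical threshold for "the results are so wild"

def summarize_runs(scores: list[float]) -> float:
    """Average the runs closest to the median; fail loudly on wild spreads."""
    if len(scores) < MIN_RUNS:
        raise ValueError(f"need at least {MIN_RUNS} runs, got {len(scores)}")
    median = statistics.median(scores)
    # keep the runs closest to the median -- the "most consistent" ones
    best = sorted(scores, key=lambda s: abs(s - median))[:RUNS_TO_AVERAGE]
    spread = (max(best) - min(best)) / median * 100
    if spread > MAX_SPREAD_PCT:
        raise ValueError(f"kept runs vary by {spread:.1f}%; check the test setup")
    return statistics.fmean(best)
```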

This might require a separate model that sits between the "hardware"/"benchmark" models and the "result" model to group the raw results.
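One possible shape for that intermediate model, sketched as plain dataclasses (all names here are hypothetical, and `summary_score` reuses the `summarize_runs` sketch above):

```python
from dataclasses import dataclass, field

@dataclass
class BenchmarkRun:
    """One raw run of a benchmark; maps onto the existing "result" model."""
    score: float

@dataclass
class TestSession:
    """The proposed in-between model: groups all raw runs of one benchmark
    on one card so comparisons use a single summarized score per test."""
    hardware_id: int
    benchmark_id: int
    runs: list[BenchmarkRun] = field(default_factory=list)

    def summary_score(self) -> float:
        # delegates to summarize_runs() from the sketch above
        return summarize_runs([r.score for r in self.runs])
```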

gballan added the enhancement label 2023-07-06 00:28:48 -04:00
Reference: BitGoblin/game-data#5