Find a Way to Handle Multiple Results for Benchmarks #5
I want every benchmark run at least four times, then the three most consistent/best runs averaged to tease out any fluky results, or, in extreme cases, to flag that the results are so wild there must be a problem with the test setup. There should therefore be a way to group results together as a single test of a card, so the group can be used in comparisons. A sketch of the aggregation rule follows below.
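A minimal sketch of that rule, assuming numeric scores where higher is better; the function name, the relative-spread threshold, and the "most consistent three of four" selection are my own illustration, not anything already in this repo:

```python
from itertools import combinations
from statistics import mean, stdev

def aggregate_runs(runs: list[float], min_runs: int = 4, keep: int = 3,
                   max_rel_spread: float = 0.10) -> float:
    """Average the `keep` most mutually consistent runs out of at least
    `min_runs` raw scores, refusing to aggregate wild results."""
    if len(runs) < min_runs:
        raise ValueError(f"need at least {min_runs} runs, got {len(runs)}")
    # Choose the size-`keep` subset with the smallest standard deviation,
    # i.e. the most consistent cluster, to drop fluky outliers.
    best = min(combinations(runs, keep), key=stdev)
    avg = mean(best)
    # If even the best cluster is wildly spread, the test setup is suspect.
    if avg and stdev(best) / avg > max_rel_spread:
        raise ValueError("runs too inconsistent; check the test setup")
    return avg
```

For example, `aggregate_runs([141.2, 139.8, 140.5, 118.3])` drops the 118.3 outlier and returns roughly 140.5.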
This might require a separate model that sits between the "hardware"/"benchmark" models and the "result" model to group the raw results.
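The issue doesn't say what ORM or framework the project uses, so purely to illustrate the shape of that intermediate model, here is a Django-style sketch (all model and field names hypothetical):

```python
from django.db import models

class BenchmarkRun(models.Model):
    """Hypothetical grouping model between the hardware/benchmark models
    and the raw results: one test session of one benchmark on one card."""
    hardware = models.ForeignKey("Hardware", on_delete=models.CASCADE)
    benchmark = models.ForeignKey("Benchmark", on_delete=models.CASCADE)
    recorded_at = models.DateTimeField(auto_now_add=True)

class Result(models.Model):
    """One raw result from a single execution of the benchmark."""
    run = models.ForeignKey(BenchmarkRun, related_name="results",
                            on_delete=models.CASCADE)
    score = models.FloatField()
```

With that shape, comparisons read from `BenchmarkRun` (aggregating its `results`) rather than from individual `Result` rows.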