Ideas and tips for temporal unit-testing?


Has anyone done temporal unit-testing?

I'm not even sure such a term exists, but the point is that certain operations have to complete within certain time bounds. I have some algorithms, and I want a test that catches their execution time unexpectedly increasing; I think a similar test could be used for I/O and the like, as a test_timeout or something along those lines.

Since the hardware affects execution speed, this doesn't seem trivial, so I was wondering whether someone has tried it before and can share their experience.
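For concreteness, here is a minimal sketch of the kind of check I have in mind; std::sort on random data and the 500 ms budget are just placeholders for my actual algorithms and their bounds:

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>
#include <random>
#include <vector>

int main()
{
    // Setup is deliberately outside the timed region.
    std::vector<int> data(1000000);
    std::mt19937 rng(42);
    for (auto& x : data) x = static_cast<int>(rng());

    const auto start = std::chrono::steady_clock::now();
    std::sort(data.begin(), data.end());            // the operation under test
    const auto elapsed = std::chrono::steady_clock::now() - start;

    // "test_timeout"-style check against an arbitrary budget.
    assert(elapsed < std::chrono::milliseconds(500));
}
```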

Thanks

Edit: trying to compile a list of things to keep in mind.

Some notes from my own experience... We care about the performance of many of our components, and to keep track of their timings we have a framework very similar to a unit-test one (its guts just call CppUnit or boost::test, the kind of thing we tend to use anyway). We call these "component benchmarks" rather than unit tests.
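Very roughly, such a "component benchmark" case could look like the sketch below; the Boost.Test harness, the stand-in workload and the logging format are illustrative assumptions, not our actual framework, but the log-rather-than-assert style matches the first point in the list that follows:

```cpp
#define BOOST_TEST_MODULE ComponentBenchmarks
#include <boost/test/included/unit_test.hpp>

#include <algorithm>
#include <chrono>
#include <iostream>
#include <numeric>
#include <vector>

// Stand-in for a real component; substitute the actual code under test.
static void component_under_test()
{
    std::vector<int> v(1000000);
    std::iota(v.rbegin(), v.rend(), 0);
    std::sort(v.begin(), v.end());
}

BOOST_AUTO_TEST_CASE(component_benchmark_example)
{
    using clock = std::chrono::steady_clock;

    const auto start = clock::now();
    component_under_test();
    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        clock::now() - start).count();

    // No pass/fail threshold on the time: just log it for the nightly run.
    std::cout << "component_benchmark_example: " << ms << " ms" << std::endl;
}
```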

  • We don't specify an upper time limit and then pass/fail against it... we just log the times. (This is partly down to customers' reluctance to commit to hard performance requirements, despite caring about them a lot!) (We did try pass/fail timing at one point and had a bad experience, especially on developer machines... lots of false alarms because an email arrived or something was being indexed in the background.)
  • Developers working on optimization can run the relevant benchmarks on their own without having to build the complete system (much like unit tests let you focus on one bit of the codebase).
  • Most benchmarks time many iterations of some operation. Lazy resource creation can mean the first use of a component has a lot of extra time associated with it. We log "first", "subsequent average" and "overall average" times; make sure you understand the reasons for any significant differences between them (see the sketch after this list). In some cases we explicitly benchmark the setup time as a separate case.
  • It should be obvious, but: only time the code you actually care about, not the test-environment setup!
  • You end up benchmarking more "realistic" cases than the ones you unit-test, so test setup and test runtimes tend to be much longer.
  • We have an autotest machine that runs all the benchmarks overnight and posts a log of all the results. In principle we could graph them, or flag components that have slipped below a performance goal. In practice we never got around to setting up anything like that.
  • You want an autotest machine that is completely free of other duties (for example, if it is also your SVN server, a big checkout will look like a big performance regression).
  • Think about other scalar quantities you might want to benchmark over time and plan to support them from the start. For example: "compression ratio achieved", "Skynet AI intelligence"...
  • Don't let people draw conclusions from benchmark data gathered on sub-minimum-spec hardware. I've seen design effort wasted because of a benchmark run on someone's junk laptop, when a run on a high-end server would have indicated something completely different!
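To make the "first" / "subsequent average" / "overall average" point concrete, here is a minimal sketch of such a measurement loop using plain std::chrono; the helper name, iteration count and dummy workload are made up for illustration, and our real framework does more, but the split is the same idea:

```cpp
#include <chrono>
#include <cmath>
#include <iostream>
#include <vector>

// Times `iterations` calls of `op` and logs "first", "subsequent average"
// and "overall average" in milliseconds.  Only the call itself is inside
// the timed window; any per-iteration setup belongs outside it.
template <typename Op>
void run_benchmark(const char* name, int iterations, Op op)
{
    using clock = std::chrono::steady_clock;
    std::vector<double> times_ms;
    times_ms.reserve(iterations);

    for (int i = 0; i < iterations; ++i) {
        const auto start = clock::now();
        op();
        const std::chrono::duration<double, std::milli> d = clock::now() - start;
        times_ms.push_back(d.count());
    }

    double total = 0.0;
    for (double t : times_ms) total += t;
    const double first = times_ms.front();
    const double later_avg = times_ms.size() > 1
        ? (total - first) / (times_ms.size() - 1)
        : first;

    // A large gap between "first" and "subsequent average" usually points
    // at lazy initialization that only the first call pays for.
    std::cout << name
              << " first: " << first << " ms,"
              << " subsequent average: " << later_avg << " ms,"
              << " overall average: " << total / times_ms.size() << " ms\n";
}

int main()
{
    // Placeholder workload; substitute the component call you care about.
    run_benchmark("dummy_workload", 100, [] {
        volatile double x = 0.0;
        for (int i = 0; i < 100000; ++i) x = x + std::sqrt(static_cast<double>(i));
    });
}
```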
