there are always cases where a different technology, etc, isn't applicable, though in general SSDs seem to be becoming the norm as primary access / caching devices, with mechanical HDDs used more for longer-term / less frequently accessed data. though where possible, i think we're getting to the point where holding whole DBs in RAM is happening.
there's no specific formula to work out system resources, as different OS versions, code implementations and bugs will all lead to variations. the general rule is to make sure you've got enough bandwidth for what's expected / needed, and during pre-testing, ensure the hardware used is able to cope with those sorts of loads.
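as a rough illustration of that "enough bandwidth" rule, here's a back-of-the-envelope sketch. to be clear, the listener count, bitrate and ~10% overhead figure here are all assumptions picked for the example, not an official formula (as said above, there isn't one):

```python
# rough outbound bandwidth estimate for a single stream - a sketch,
# not a formula from the server docs (there isn't one).
def required_bandwidth_kbps(listeners, bitrate_kbps, overhead=1.1):
    """Estimate outbound bandwidth for one stream.

    listeners     -- expected concurrent listeners
    bitrate_kbps  -- stream bitrate in kilobits/second
    overhead      -- fudge factor for protocol/TCP overhead (assumed ~10%)
    """
    return listeners * bitrate_kbps * overhead

# e.g. a 512-listener stream at an assumed 128 kbps:
kbps = required_bandwidth_kbps(512, 128)
print(f"~{kbps / 1000:.1f} Mbps outbound needed")  # ~72.1 Mbps
```

multiply up for however many streams you run, and leave headroom for bursts - then confirm in pre-testing that the hardware actually copes at that level.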
for such things, v2.2 should be better than v2.0, but mileage will likely vary, and i suspect it's still not as light as v1.x was (though we're having to do a lot more, so that's expected). that needs more work, such as the networking core changes on the linux builds and further code profiling - but the main thing with v2.2 is to get the key issues ironed out, which should be the case now.
from our testing, with the fixes made, a 512 maxuser stream running at capacity went from ~200% CPU time down to around 60% CPU time on the same single-core VM, i.e. it couldn't cope before but now it can.
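if you want to watch that sort of CPU figure yourself during pre-testing, here's a minimal sketch - it assumes the psutil python package is installed and that you pass the server process's PID on the command line (none of this is part of the server itself):

```python
# minimal per-process CPU monitor - a sketch, assuming the psutil
# package is installed (pip install psutil) and the PID is known.
import sys
import psutil

pid = int(sys.argv[1])          # PID of the streaming server process
proc = psutil.Process(pid)

# cpu_percent(interval=N) blocks for N seconds and returns the usage
# over that window. readings above 100% mean the process wants more
# than one core's worth of time, i.e. a single-core VM can't cope.
while True:
    print(f"cpu: {proc.cpu_percent(interval=5.0):.0f}%")
```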