T********i posts: 2416 | 1 An interview from this May, more than half a year ago now.
Here is an excerpt describing ACM (Auto Commit Memory), the technology behind the 9-million-IOPS benchmark.
You barely need to think to see that the performance ceiling of a device like this is the PCI bus. If they don't build it, someone else will. It has already been done for NICs, so an SSD is even more feasible.
Besides, I have already shown that even without sync IO, chaining single machines across DCs preserves both consistency and durability unless every DC dies at once.
Let me stress it one more time: engineers who have lost basic objectivity are bound to get burned wherever they go. It is only a matter of time.
http://www.dcig.com/2012/05/boosting-transactional-performance.
On the other hand, when you write something to Auto Commit Memory, by design
it will be automatically committed. In other words, it is durable across
service interruptions such as power failures.
Note that part of the ACM API will be a write barrier operation, like a
flush, ensuring that the data is cleared from the processor complex, various
levels of CPU caches and what not. Once flushed from the processor complex,
it's automatically persisted to ioMemory.
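The ACM API itself is not public, but the write-then-barrier pattern described above can be illustrated by analogy: below is a minimal Python sketch in which a file-backed memory map stands in for the ioMemory-backed ACM region and `flush()` stands in for the write barrier (both stand-ins are assumptions for illustration, not the real API).

```python
import mmap
import os
import tempfile

# A file-backed mmap stands in for the ioMemory-backed ACM region
# (an assumption for illustration; the real ACM API is not public).
path = os.path.join(tempfile.mkdtemp(), "acm_region")
with open(path, "wb") as f:
    f.truncate(4096)              # fixed-size persistent region

f = open(path, "r+b")
region = mmap.mmap(f.fileno(), 4096)

record = b"txn-42:commit"
region[0:len(record)] = record    # plain store into the mapped region
region.flush()                    # write barrier: push the data out of
                                  # volatile buffers toward stable media

region.close()
f.close()
```

The point of the barrier is ordering: only after it returns may the application assume the store has left the processor complex and reached durable media.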
What attracts a lot of database developers to this new API is the notion of
solving the tail-of-the-transaction-log performance inhibitor. By
definition, it is the transaction log through which they ensure the ACID
properties of transactions.
Previously developers had to issue blocking synchronous I/Os at the tail of
their log, to ensure that the most recent writes before service interruption
were durable. With our ACM API they can convert that blocking synchronous
IO to a non-blocking asynchronous IO by maintaining the tail of their
transaction log in auto commit memory.
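The conversion described above can be sketched as follows. All names here are hypothetical: `commit_acm` stands in for a plain store into the auto-commit region (which the hardware persists without an fsync), while `commit_blocking` shows the traditional synchronous pattern it replaces.

```python
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "txn.log")

# Traditional pattern: every commit blocks on a synchronous flush.
def commit_blocking(record: bytes) -> None:
    with open(log_path, "ab") as log:
        log.write(record)
        log.flush()
        os.fsync(log.fileno())   # caller stalls until the media acks

# ACM-style pattern: the log tail lives in an auto-commit region, so a
# plain memory store is already durable; no fsync on the commit path.
acm_tail = bytearray()           # stands in for the persistent ACM region

def commit_acm(record: bytes) -> None:
    acm_tail.extend(record)      # durable by construction under ACM

def lazy_drain() -> None:
    # Background task: migrate the tail to the backing store at leisure.
    with open(log_path, "ab") as log:
        log.write(bytes(acm_tail))
        os.fsync(log.fileno())
    acm_tail.clear()

commit_acm(b"t1;")
commit_acm(b"t2;")
lazy_drain()
```

The win is that `commit_acm` runs at memory speed, while the expensive fsync moves off the commit path into `lazy_drain`.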
They may still persist the tail of their log to a backing store but they
will not need to do it synchronously through a blocking IO. If there's an
interruption, for instance upon a system or an application restart, they can
always recover their state through what was persisted in auto commit memory. So developers are quite keen on that.
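Recovery under this scheme reduces to reading back whatever the auto-commit region still holds at restart. A hedged sketch of that step, again simulating the ACM region with an ordinary file since the real API is not public (`recover` is a hypothetical name):

```python
import os
import tempfile

dirpath = tempfile.mkdtemp()
acm_file = os.path.join(dirpath, "acm_tail")     # simulated ACM region
store = os.path.join(dirpath, "backing.log")     # durable backing store

# Before the simulated crash: two records reached the ACM region but
# were never drained to the backing store.
with open(acm_file, "wb") as f:
    f.write(b"t7;t8;")

def recover() -> bytes:
    """On restart, append the surviving ACM tail to the backing store."""
    tail = b""
    if os.path.exists(acm_file):
        with open(acm_file, "rb") as f:
            tail = f.read()
    with open(store, "ab") as log:
        log.write(tail)
        os.fsync(log.fileno())
    return tail

recovered = recover()
```

Because the region survived the interruption, no committed record is lost even though the crash happened before the backing store was updated.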