1

Topic: Problems with IO speed

Good afternoon!
There is a cluster setup on VMware 6.5 (hardware: HP ProLiant Gen10 blades in a BladeSystem c7000 chassis + 3PAR over FCoE through HP FlexFabric). I'm benchmarking IO on two Oracle 11g configurations running in virtual machines. The first is Windows Server 2012 R2 (LSI 3000 controller) with its 3PAR volume on an SSD shelf (production tier); the second is a 2-node OEL 7.5 RAC (PVSCSI controller) with its 3PAR volume on 10k RPM drives (about 200 spindles in RAID, test tier for now). No matter what I do, I can't push the speeds anywhere near decent. The Windows VM on SSD peaks at 2 Gbit/s (gigabits, not gigabytes), and the RAC won't go above 1 Gbit/s. The storage team just shrugs: their links are all 10 and 20 Gbit, so it's "check your drivers, you must have the wrong ones installed". And I have already checked - on the RAC everything is by the book: the vmdk disks, independent mode, separate PVSCSI controllers. Switching the elevator from deadline to noop gave roughly a 10% gain, but the numbers still don't even come close.
Has anyone seen anything similar? At this stage I need to prove that "the right drivers are in place", so I'd be glad of any help with this!
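For anyone reproducing this, a minimal sketch of how the elevator change mentioned above can be checked and applied on OEL7. The device name sdb in the comment is an assumption; the write needs root and is not persistent across reboots:

```shell
#!/bin/sh
# List the active IO scheduler for each block device;
# the one shown in [brackets] is currently selected.
for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue
    printf '%s: %s\n' "$f" "$(cat "$f")"
done

# Switch one device to noop at runtime (hypothetical device name):
# echo noop > /sys/block/sdb/queue/scheduler
```

To make it survive a reboot you would put it on the kernel command line or in a udev rule rather than echoing it by hand.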

2

Re: Problems with IO speed

PyroTechnic wrote:

a 2-node OEL 7.5 RAC (PVSCSI controller) with its 3PAR volume on 10k RPM drives (about 200 spindles in RAID, test tier for now). No matter what I do, I can't push the speeds anywhere near decent.

Driving it with what, exactly? The number of spindles is not so much about speed as about throughput.

3

Re: Problems with IO speed

PyroTechnic, first run synthetic benchmarks: disk (IOmeter or whatever you like) and network (iperf, for example). And the hardware internals would be more interesting than the chassis model - specifically, which network and SAN controllers are in there. :)

4

Re: Problems with IO speed

Good afternoon!
We have roughly the same problem, and it's somewhere in the VMware-3PAR combination.
If you do synchronous writes in 8K blocks (for example with dd), the way a DB usually writes, the speed is around 10-15 MB/s.
If you raise the block size to 1 MB or more, write speed looks more or less normal.
One more thought: if you start 10 (20) VMs and run the write test on each, the total across all of them is 100-150 (200-300) MB/s. It looks as if, no matter how many logical disks a VM has (even on different controllers), VMware uses a single IO queue for all of them (per VM).
With several VMs, each one gets its own IO queue.
We went to VMware and HP support, but got nothing intelligible back.
In the end we set these on the DB:
*.filesystemio_options ='ASYNCH'
*.disk_asynch_io=true
and moved the redo logs to SSD.
The most interesting part: with an MSA or a DS3400 as the backend, write speed on them is 3x higher.
If you find anything - post it here.
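The 8K-vs-1M observation above is easy to reproduce with plain dd. A sketch, with the target path as a placeholder (point it at a file on the datastore under test); oflag=dsync forces a synchronous write per block, roughly the way a DB writes its logs:

```shell
#!/bin/sh
# Write the same 8 MB twice: many small synchronous writes,
# then a few large ones. Where per-request latency dominates,
# the 8K run is expected to be far slower on the setups discussed.
dd if=/dev/zero of=/tmp/ddtest bs=8k count=1000 oflag=dsync
dd if=/dev/zero of=/tmp/ddtest bs=1M count=8 oflag=dsync
rm -f /tmp/ddtest
```

dd prints the elapsed time and MB/s for each run; comparing the two shows whether per-IO latency or raw bandwidth is the limit.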

5

Re: Problems with IO speed

landy wrote:

If you do synchronous writes in 8K blocks (for example with dd), the way a DB usually writes, the speed is around 10-15 MB/s.
If you raise the block size to 1 MB or more, write speed looks more or less normal.

If I understand correctly, FCoE runs over Ethernet,
and Ethernet latency is not small at all.
IMHO

6

Re: Problems with IO speed

Possibly.
But ours is connected over FC.

7

Re: Problems with IO speed

landy wrote:

Good afternoon!
We have roughly the same problem, and it's somewhere in the VMware-3PAR combination.
If you do synchronous writes in 8K blocks (for example with dd), the way a DB usually writes, the speed is around 10-15 MB/s.

dd writes synchronously in a single stream; never benchmark with dd. Take fio, with for example 8 jobs and queue depth 32.
fio --name=iops --rw=readwrite --bs=1M --size=5G --iodepth=20 --runtime=400 --directory=fio --ioengine=libaio --direct=1 --max-jobs=8
--size = pick something like 50x the array cache, so the cache doesn't soak it all up
fio --name=iops --rw=randread --bs=8K --size=5G --iodepth=20 --runtime=400 --directory=fio --ioengine=libaio --direct=1 --max-jobs=8

8

Re: Problems with IO speed

Well, and so what?
Start it in 10-20 VMs in parallel and you get 10-20 slow "streams".
What for? Well, for example, we are emulating the work of the ARC process (archiving redo logs), and it works in a single stream.
One redo log, one archlog. I don't remember exactly any more, but when I traced it (strace), it turned out that
on VMware it writes in 64K (16K???) blocks.
If memory serves, when the archlog is opened for copying there is a request to the device (or an attempt to set a large block) for the maximum block size it can work with (bsize in dd terms); the driver answers that it is not supported, and
then the ARC process writes in small blocks. So most likely the problem is in the VMware driver.
As I wrote, if you set bsize=1M or more in dd, everything flies through very quickly.
In some cases, on a heavily loaded system, this can produce very serious slowdowns and problems.
The problem is not the single stream itself - it is somewhere nearby.
Why does an enterprise-level system work slower than an entry-level one on a dumb copy of a single file?

9

Re: Problems with IO speed

> What for? Well, for example, we are emulating the work of the ARC process (archiving redo logs), and it works in a single stream.
It doesn't matter whether it is one stream or not.
The redo writer can flush the whole redo buffer (256 by default) at a time, in blocks.
dd writes 4K (bs) and waits until it is written; real applications don't wait, they keep writing blocks until they exhaust their own buffer, or until the block device queue fills up.
I.e. you need to set on the device a queue depth that your array can chew through in reasonable time:
/sys/block/*/device/queue_depth
/sys/block/*/queue/nr_requests
And check that interrupt handling is spread across all CPUs, because at a million IOPS all the IO ends up stuck on one CPU.
In general, you first need synthetic fio runs to understand where the bottleneck is (I once cured everything with a single replacement; link) and what the limits are for random and sequential IO. And don't expect throughput miracles from spindles with small blocks: a hard disk can read maybe 100-150 random blocks a second, while sequentially you can write at close to the full speed of the array.
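A quick, read-only way to see the two queue settings mentioned above across all devices (writing new values needs root, and sensible values depend on what the array can actually absorb):

```shell
#!/bin/sh
# Print the HBA queue depth and the block-layer request queue
# size for every block device that exposes them (loop devices,
# for example, have no device/queue_depth and are skipped).
for f in /sys/block/*/device/queue_depth /sys/block/*/queue/nr_requests; do
    [ -e "$f" ] || continue
    printf '%s = %s\n' "$f" "$(cat "$f")"
done
```

Raising nr_requests lets more IO queue up in the kernel; raising queue_depth pushes more outstanding commands at the array, which only helps if the array can service them.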

10

Re: Problems with IO speed

So when the redolog is archived, it is in fact being written to the array cache.
But for some reason it is still very slow.
And support couldn't tell us anything intelligible.

11

Re: Problems with IO speed

In newer versions (since 11.2, I think) it doesn't simply write the log out "as read" in big blocks; it tries to completely reformat the log, getting rid of wasted space. As a result the archivelog is usually much smaller than the online redolog.
This depends first of all on the number of CPUs.
I had to grow the ORL to 2G so that the archive came out at least 1.2-1.5G.

12

Re: Problems with IO speed

landy wrote:

So when the redolog is archived, it is in fact being written to the array cache.
But for some reason it is still very slow.
And support couldn't tell us anything intelligible.

Is the problem with the writes or with the archiving? And what does "slowly" mean - how did you measure it?

13

Re: Problems with IO speed

PyroTechnic wrote:

VMware 6.5

Throw out the virtualization and run it on bare metal - the result will surprise you a lot.

14

Re: Problems with IO speed

Alexey Zhidkov wrote:

(snip)
Throw out the virtualization and run it on bare metal - the result will surprise you a lot.

His setup is quite a normal variant - a 3PAR. The topic starter doesn't yet suspect how crookedly things can be tuned with, say, a Xyratex.
Topic starter - here are two links to steer your thoughts in the right direction.
link number 1
link number 2
My point is that this should be done by an invited specialist who has been through the courses. The default VMware settings are not kosher at all, alas.

15

Re: Problems with IO speed

Andy_OLAP wrote:

His setup is quite a normal variant - a 3PAR.

Makes no difference what the storage is. I've tried running databases on VMware, with hi-end class arrays, and all the same: heavily loaded databases should run on bare metal.

16

Re: Problems with IO speed

Alexey Zhidkov, and the reasons for the poor performance, obviously, nobody went looking for?