Same machine: ASE 12.5.3 Linux vs ASE 12.5.3 Windows - Linux is 25% faster!!

We had a report from one of our clients recently that our Sybase ASE-based 
app was running slowly on their new Linux box. As we develop internally 
against ASE on Windows we thought it would be an interesting test to install 
both the Windows and Linux flavors of ASE 12.5.3 onto the same server and 
look at the relative performance.

We used a Dell PowerEdge 1950 with 2 x Intel Xeon 3.73GHz dual core 
processors.

On one drive we put SuSE Enterprise Linux Server 10 x86-64, and on the other 
we put Windows Server 2003 Std R2 64-bit edition. Both were fresh-off-the-CD 
OS installs.

On both we tested ASE 12.5.3 ESD#7 (the 32-bit version for Windows, the 
64-bit version for Linux).

The database dump and configurations were identical, and both were set up 
with 1 online engine and 2100MB data cache (so as not to allow the Linux box 
to take advantage of the greater cache allowed by the 64-bit version of 
Sybase).
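For anyone repeating this test, config parity is easy to verify directly rather than by assumption; a minimal sketch using ASE's standard procedures (run on both instances, save the output, and diff the two files client-side):

```sql
-- Dump the full server configuration on each instance, then diff
-- the two result files outside ASE.
sp_configure                -- full list of configuration parameters
go
sp_cacheconfig              -- named caches, their sizes and pool layout
go
sp_helpcache                -- cache bindings (which objects use which cache)
go
```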

A dump (originating from another 32-bit Windows instance of Sybase) was 
loaded into both Sybase instances, and the indexes were reloaded.

For the purposes of comparison we chose one of our reporting stored procs 
which hits against three large tables, all with 300,000+ rows (and a bunch 
of smaller ones).

Now, for the results...

When booted into Windows we saw an average of 29 seconds for our stored 
proc. (This is in line with the performance we see with this proc on other 
similar Windows-based Sybase instances on our other servers).

The same test in Linux gave average results of 22 seconds.

We were expecting the Linux version to possibly be slightly faster (because 
the OS is lighter-weight) but we certainly weren't expecting performance 
gains of this magnitude.

Has anyone else performed similar tests? What are the Sybase community's 
thoughts on this? Is there something that's pre-tuned (out-of-the-box) in 
the Linux version which might need manual tuning for Windows? (Sorry - 
didn't capture show plans/io/stats so can't comment on whether there were 
different query plans or i/o behavior.)
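For anyone re-running a comparison like this, the plan and I/O data mentioned above can be captured with ASE's session-level switches; a minimal sketch, where `our_report_proc` is a placeholder name for the proc under test:

```sql
-- Session-level diagnostics to enable before the timed run.
set showplan on             -- print the query plan for each statement
set statistics io on        -- logical/physical reads per table
set statistics time on      -- parse/compile and execution times
go
exec our_report_proc        -- placeholder for the reporting proc
go
```

Capturing this on both platforms would settle the query-plan question directly.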

As an aside, some other observations (and I'm afraid I don't have more 
accurate performance metrics for these):
*) Rebuilding the indexes (by dropping and recreating them) flew by in 
Windows, but took ages under Linux.
*) In our initial tests we forgot to turn off hyperthreading. It didn't make 
much difference to the Windows results, but we saw a huge performance 
degradation in Linux (a 30-second proc would take 10+ minutes). An OS 
difference, probably.
*) For our Windows tests, upgrading from Sybase 12.5.2 GA, to 12.5.3 ESD#7 
gave a 25% increase in the performance of our test stored proc.


- Richard 


Richard
2/2/2007 9:28:06 PM
sybase.ase.general
Richard Mundell wrote:
> The database dump and configurations were identical, and both were set up
> with 1 online engine and 2100MB data cache (so as not to allow the Linux box
> to take advantage of the greater cache allowed by the 64-bit version of
> Sybase).

	Anything below 4GB still fits in a 32-bit address space. But I
	can understand the issue. Windows can't even support a full
	32-bit address space properly.

> Has anyone else performed similar tests? What are the Sybase community's
> thoughts on this? Is there something that's pre-tuned (out-of-the-box) in
> the Linux version which might need manual tuning for Windows? (Sorry -
> didn't capture show plans/io/stats so can't comment on whether there were
> different query plans or i/o behavior.)

	This isn't really the issue. The problem is the way Windows
	is designed and its code. When ASE runs, it's not just its
	own code that's running but also OS library routines (or DLLs
	in Windows parlance). The overall design of how these run
	is the problem.

> As an aside, some other observations (and I'm afraid I don't have more
> accurate performance metrics for these):
> *) Rebuilding the indexes (by dropping and recreating them) flew by in
> Windows, but took ages under Linux.

	If the ASE config options are both the same and the hardware
	is the same, it can only be the way I/O is managed by the
	respective OSes. Did you use raw devices with Linux or use
	its fast file system?

	Also, did you use the same disks and disk layouts?

> *) In our initial tests we forgot to turn off hyperthreading. It didn't make
> much difference to the Windows results, but we saw a huge performance
> degradation in Linux (a 30 second proc would take 10 minutes+). An OS
> difference probably.

	Yes, most likely.


-am	© 2007
A. M.
2/4/2007 7:18:22 AM
On 4 Feb, 07:18, "A. M." <amfor...@gmail.com> wrote:
>
>         This isn't really the issue. The problem is the way Windows
>         is designed and its code. When ASE runs, its not just its
>         own code that's running but OS library programs (or DLLs
>         in Windows parlance). The overall design of how these run
>         is the problem.
>

1. What are the design problems, exactly, with Windows and its code,
that lead to the results that the OP sees? (NB "Windows sucks" is not
an acceptable answer; this is a group for technical professionals.)

2. You point out that ASE loads OS library programs when it runs on
Windows. Please enumerate the platforms (hardware/OS) on which ASE
runs *without* loading OS library programs. An exact list, please.

3. Please describe what aspects of the 'overall design' of DLLs
cause the 'problem' that the OP sees. Is it inevitably inherent in
any DLL architecture, or specific to the ones supplied with Windows?
(Again, detailed technical reasoning please; this is a group for
technical professionals.)


Note that I put 'problem' in quotes because so far all the OP seems
to have demonstrated is that a 64-bit version of ASE outperforms a 32-bit
version of ASE. That's a *good* thing, isn't it? If the 64-bit version
was no better, then we'd have a problem.

TWJ


tonewheel
2/8/2007 1:03:24 PM
You don't specifically mention these items so I'll ask for completeness' sake ...

1 - Do you get the same query plan and stats io output from both environments?  (make sure we're comparing apples to apples)

2 - Do you run the queries 2 times, using the first query to pre-warm cache, then take measurements on the 2nd query?  (objective being to minimize effect of physical io's on each query)

3 - Does running the same query in series several times give the same average time as previously observed?  (trying to rule out any odd circumstances/coincidences which could skew measurements on a single-query measurement)

4 - Have you conducted any monitoring of the dataserver during the running of your test queries?  sp_sysmon begin/end sample?  MDA tables (eg, monSysStatement)?  and if you have captured monitoring data, does it look the same for both tests?  (any variations in numbers would be of interest; assuming the same query plan and stats io output I'd be interested in those measurements having to deal with OS interaction ... disk io rates/waits, cpu yields, possibly even cache hit rates; from a MDA point of view you'd want to be looking at the various wait states and total wait times as an initial indicator of differences)

5 - Obviously (?) you minimized the activity of other processes on the hardware during your ASE query tests?  (I find that running "Command & Conquer Generals: Zero Hour" eats up lots of cpu on my AMD-64 ... causes everything else to slow waaaaaaaaay down! ;-)

6 - The only other thing I can think of is some sort of OS monitoring during the tests; objective being to see if there are more 'active' background processes running in Windows than in Linux ... though I'm not real sure a) how to do this in such a way that b) you could compare apples to apples (oranges to tangerines?)
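Point 4 above can be sketched concretely. `our_report_proc` is a placeholder name, and the begin/end form of sp_sysmon assumes no other sp_sysmon sample is running on the server at the same time:

```sql
-- Bracket the test run with an sp_sysmon sample (point 4 above).
sp_sysmon begin_sample
go
exec our_report_proc        -- placeholder for the proc under test
go
sp_sysmon end_sample        -- prints the report for the sampled interval
go
```

Comparing the kernel and disk-I/O sections of the two reports would be the first place to look for OS-level differences.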

Richard Mundell wrote:
> <snip>
Mark
2/8/2007 2:25:39 PM
1. No, we didn't do this. But as we're starting from exactly the same 
database dump, and using the same ESDs, you'd have thought that Sybase 
should be following pretty much the same query plan both times? See 4.

2. Yes. In fact, we ran each test 4 times, and took the average of the last 
3 runs (excluding any abnormally high result).

3. Yes, give or take about 1%.

4. Afraid not. Not sure what having this information would do to help - our 
Sybase code is written for 40+ clients who run a selection of different OS 
and hardware platforms. We're not in a position to tune our code for a 
specific platform (actually, we code on Windows, so if it was going to be 
h/w-specific, you'd have thought our code would be faster on Windows).

5. You mean Sybase isn't supposed to be run while I've got Quake 4 running? 
LOL. Nothing else was running on the machines (other than core OS 
processes). We didn't even have AV software installed.

6. Again, only standard OS processes were running. No other server daemons 
were running on Windows (except the built-in file server stuff which 
should've been dormant at the time anyway).
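The run timings above can also be captured server-side, which takes client and network variation out of the picture; a minimal sketch (`our_report_proc` is a placeholder name for the proc under test):

```sql
-- Measure one run's elapsed time on the server's own clock.
declare @t0 datetime
select @t0 = getdate()
exec our_report_proc        -- placeholder for the proc under test
select datediff(ms, @t0, getdate()) as elapsed_ms
go
```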

R.

"Mark A. Parsons" <iron_horse@no_spamola_compuserve.com> wrote in message 
news:45cb40f3$1@forums-1-dub...
> <snip>

Richard
2/12/2007 7:31:31 PM
1 & 4 - I agree that they should be the same given the same ESD level.

But I would *NOT* assume they are the same.

I would expect the query plans and stats io's to be the same, but I'd want to double check just to make sure.

I wouldn't necessarily assume that sp_sysmon and monSysStatement return similar values given the different OSes, e.g., differences in library performance.

It's easier to do the extra monitoring and verify that everything is working the same ... than to assume everything's working the same when (maybe) it's not, eh.
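One way to get at those OS-interaction differences without touching the app code is a server-wide wait-event snapshot from the MDA tables; a sketch, assuming the MDA tables are installed and enabled and the login has mon_role:

```sql
-- Top wait events since the counters were last cleared; run the same
-- query on both platforms after the test and compare the rankings.
select w.WaitEventID, i.Description, w.Waits, w.WaitTime
from   master..monSysWaits w, master..monWaitEventInfo i
where  w.WaitEventID = i.WaitEventID
order  by w.WaitTime desc
go
```

Large differences in disk- or scheduler-related wait events between the two OSes would point straight at the layer responsible.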

Richard Mundell wrote:
> <snip>
Mark
2/12/2007 8:00:24 PM
Here are some more things to think about. I have no idea if any of
these are the cause of the difference but they may serve to
demonstrate just how many unknown variables you are working with
here.

- The two versions of ASE are running on different instruction sets
even though they are running on the same processor.

- The 64-bit version of ASE will be compiled and optimised (obviously)
by a 64-bit FBO compiler. Being of a later generation, the 64-bit FBO
may be doing a better job than the 32-bit compiler used to build the
Windows 32-bit version.

- The 64-bit Windows OS is encapsulating the 32-bit ASE process and
will have to perform 32-to-64-bit translations of data structures when
calling into the OS kernel and device drivers.

- The 64-bit ASE binary on Linux will have 8-byte data alignment. The
32-bit Windows ASE binary will have 4-byte data alignment. Certainly
in 64-bit mode, Intel x64 processors incur a penalty when accessing
unaligned data. Not sure if this also applies when the CPU is running
a 32-bit binary, but it's another unknown that needs to be tested.

TWJ

tonewheel
2/13/2007 2:33:30 PM