New issue, on a different server... "The 4K memory pool of named cache tempdb_cache (cache id 3, cachelet id 1) is configured too small for current demands (state 2). Transaction progress may cease or response time may increase."

What would cause this? I believe Derek mentioned in the other Discussion Thread that these types of errors are generated when a "create index" or "update stats" is performed on a large table. Is it safe for us to assume that this is the cause of this error message as well?

Thank you for your help and the time you're setting aside.
I think I just caught the reason why we may be getting the "The 4K memory pool of named cache tempdb_cache (cache id 3, cachelet id 1) is configured too small" message / error.

Two reasons actually...

1) First, the "Run Replacement: relaxed LRU" should be set to "Run Replacement: strict LRU".

2) There doesn't seem to be any logical reason for using large I/O with tempdb_cache. It should be removed. It's consuming more than half the cache.

What are your thoughts? I also notice that the "Spinlock Contention" is 12.6 % in the sp_sysmon output for tempdb_cache (included below).

1> sp_helpcache tempdb_cache
2> go
Cache Name               Config Size   Run Size     Overhead
------------------------ ------------- ----------   ----------
tempdb_cache             1000.00 Mb    1000.00 Mb   52.86 Mb

------------------ Cache Binding Information: ------------------

Cache Name     Entity Name   Type       Index Name   Status
----------     -----------   ----       ----------   ------
tempdb_cache   tempdb        database                V
(return status = 0)

1> sp_cacheconfig tempdb_cache
2> go
Cache Name                     Status    Type     Config Value Run Value
------------------------------ --------- -------- ------------ ------------
tempdb_cache                   Active    Mixed    1000.00 Mb   1000.00 Mb
                                                  ------------ ------------
                                            Total 1000.00 Mb   1000.00 Mb
==========================================================================
Cache: tempdb_cache,   Status: Active,   Type: Mixed
      Config Size: 1000.00 Mb,   Run Size: 1000.00 Mb
      Config Replacement: relaxed LRU,   Run Replacement: relaxed LRU
      Config Partition:            1,   Run Partition:            1
IO Size  Wash Size Config Size  Run Size     APF Percent
-------- --------- ------------ ------------ -----------
    4 Kb  61440 Kb      0.00 Mb    400.00 Mb          10
   32 Kb  61440 Kb    600.00 Mb    600.00 Mb          10
(return status = 0)

-------------------------------------------------------------------------------
Cache: tempdb_cache
                                  per sec      per xact       count  % of total
  -------------------------  ------------  ------------  ----------  ----------
    Spinlock Contention              n/a           n/a          n/a      12.6 %
    Utilization                      n/a           n/a          n/a       7.0 %

  Cache Searches
    Cache Hits                     597.1         273.9       179130      99.3 %
      Found in Wash                  0.0           0.0            0       0.0 %
    Cache Misses                     4.4           2.0         1312       0.7 %
  -------------------------  ------------  ------------  ----------
  Total Cache Searches             601.5         275.9       180442

  Pool Turnover
    4 Kb Pool
      LRU Buffer Grab              231.4         106.1        69422      99.5 %
        Grabbed Dirty                0.0           0.0            0       0.0 %
    32 Kb Pool
      LRU Buffer Grab                1.1           0.5          316       0.5 %
        Grabbed Dirty                0.0           0.0            0       0.0 %
  -------------------------  ------------  ------------  ----------
  Total Cache Turnover             232.5         106.6        69738

  Buffer Wash Behavior
    Statistics Not Available - No Buffers Entered Wash Section Yet

  Cache Strategy
    Cached (LRU) Buffers           560.3         257.0       168085     100.0 %
    Discarded (MRU) Buffers          0.0           0.0            0       0.0 %

  Large I/O Usage
    Large I/Os Performed             1.1           0.5          316     100.0 %
    Large I/Os Denied due to
      Pool < Prefetch Size           0.0           0.0            0       0.0 %
      Pages Requested Reside
      in Another Buffer Pool         0.0           0.0            0       0.0 %
  -------------------------  ------------  ------------  ----------
  Total Large I/O Requests           1.1           0.5          316

  Large I/O Detail
    32 Kb Pool
      Pages Cached                   8.4           3.9         2528        n/a
      Pages Used                     1.3           0.6          390      15.4 %

  Dirty Read Behavior
    Page Requests                    0.0           0.0            0        n/a

  Tuning Recommendations for Data cache : tempdb_cache
  -------------------------------------
  - Consider using Named Caches or Cache partitions or both.

> New issue, on a different server... "The 4K memory pool
> of named cache tempdb_cache (cache id 3, cachelet id 1) is
> configured too small for current demands (state 2).
> Transaction progress may cease or response time may
> increase."
>
> What would cause this? I believe Derek mentioned in the
> other Discussion Thread that these types of errors are
> generated when a "create index" or "update stats" is
> performed on a large table. Is it safe for us to assume
> that this is the cause of this error message as well?
>
> Thank you for your help and the time you're setting aside.
===============================================================================
          Sybase Adaptive Server Enterprise System Performance Report
===============================================================================

Server Version:       Adaptive Server Enterprise/12.5.4/EBF 13381/P/Sun_svr4/O
Server Name:          SYB_MXP
Run Date:             Apr 11, 2008
Sampling Started at:  Apr 11, 2008 10:10:42
Sampling Ended at:    Apr 11, 2008 10:15:42
Sample Interval:      00:05:00
Sample Mode:          Reset Counters
===============================================================================

Data Cache Management
---------------------

  Cache Statistics Summary (All Caches)
  -------------------------------------
                                  per sec      per xact       count  % of total
                             ------------  ------------  ----------  ----------
    Cache Search Summary
      Total Cache Hits             8635.1        3961.1     2590532      99.9 %
      Total Cache Misses              4.5           2.0        1335       0.1 %
  -------------------------  ------------  ------------  ----------
    Total Cache Searches           8639.6        3963.1     2591867

    Cache Turnover
      Buffers Grabbed               232.7         106.7       69814        n/a
      Buffers Grabbed Dirty           0.0           0.0           0       0.0 %

    Cache Strategy Summary
      Cached (LRU) Buffers         8538.4        3916.7     2561523     100.0 %
      Discarded (MRU) Buffers         0.3           0.1          75       0.0 %

    Large I/O Usage
      Large I/Os Performed            1.1           0.5         316      99.7 %
      Large I/Os Denied due to
        Pool < Prefetch Size          0.0           0.0           1       0.3 %
        Pages Requested Reside
        in Another Buffer Pool        0.0           0.0           0       0.0 %
  -------------------------  ------------  ------------  ----------
    Total Large I/O Requests          1.1           0.5         317

    Large I/O Effectiveness
      Pages by Lrg I/O Cached         8.4           3.9        2528        n/a
      Pages by Lrg I/O Used         247.1         113.3       74124      2932 %

    Asynchronous Prefetch Activity
      APFs Issued                     0.0           0.0           1       0.0 %
      APFs Denied Due To
        APF I/O Overloads             0.0           0.0           0       0.0 %
        APF Limit Overloads           0.0           0.0           0       0.0 %
        APF Reused Overloads          0.0           0.0           0       0.0 %
      APF Buffers Found in Cache
        With Spinlock Held            0.0           0.0           0       0.0 %
        W/o Spinlock Held           244.8         112.3       73452     100.0 %
  -------------------------  ------------  ------------  ----------
    Total APFs Requested            244.8         112.3       73453

    Other Asynchronous Prefetch Statistics
      APFs Used                       0.0           0.0           1        n/a
      APF Waits for I/O               0.0           0.0           1        n/a
      APF Discards                    0.0           0.0           0        n/a

    Dirty Read Behavior
      Page Requests                   0.0           0.0           0        n/a

-------------------------------------------------------------------------------
Cache: default data cache
                                  per sec      per xact       count  % of total
  -------------------------  ------------  ------------  ----------  ----------
    Spinlock Contention              n/a           n/a          n/a       1.5 %
    Utilization                      n/a           n/a          n/a      93.0 %

  Cache Searches
    Cache Hits                    8037.7        3687.0      2411321     100.0 %
      Found in Wash                  0.0           0.0            0       0.0 %
    Cache Misses                     0.1           0.0           23       0.0 %
  -------------------------  ------------  ------------  ----------
  Total Cache Searches            8037.8        3687.1      2411344

  Pool Turnover
    4 Kb Pool
      LRU Buffer Grab                0.3           0.1           76     100.0 %
        Grabbed Dirty                0.0           0.0            0       0.0 %
  -------------------------  ------------  ------------  ----------
  Total Cache Turnover               0.3           0.1           76

  Buffer Wash Behavior
    Statistics Not Available - No Buffers Entered Wash Section Yet

  Cache Strategy
    Cached (LRU) Buffers          7977.3        3659.3      2393178     100.0 %
    Discarded (MRU) Buffers          0.3           0.1           75       0.0 %

  Large I/O Usage
    Large I/Os Performed             0.0           0.0            0       0.0 %
    Large I/Os Denied due to
      Pool < Prefetch Size           0.0           0.0            0       0.0 %
      Pages Requested Reside
      in Another Buffer Pool         0.0           0.0            1     100.0 %
  -------------------------  ------------  ------------  ----------
  Total Large I/O Requests           0.0           0.0            1

  Large I/O Detail
    32 Kb Pool
      Pages Cached                   0.0           0.0            0        n/a
      Pages Used                     0.0           0.0            0        n/a

  Dirty Read Behavior
    Page Requests                    0.0           0.0            0        n/a

-------------------------------------------------------------------------------
Cache: log_cache
                                  per sec      per xact       count  % of total
  -------------------------  ------------  ------------  ----------  ----------
    Spinlock Contention              n/a           n/a          n/a       0.0 %
    Utilization                      n/a           n/a          n/a       0.0 %

  Cache Searches
    Cache Hits                       0.3           0.1           81     100.0 %
      Found in Wash                  0.0           0.0            0       0.0 %
    Cache Misses                     0.0           0.0            0       0.0 %
  -------------------------  ------------  ------------  ----------
  Total Cache Searches               0.3           0.1           81

  Pool Turnover                      0.0           0.0            0        n/a

  Buffer Wash Behavior
    Statistics Not Available - No Buffers Entered Wash Section Yet

  Cache Strategy
    Cached (LRU) Buffers             0.9           0.4          260     100.0 %
    Discarded (MRU) Buffers          0.0           0.0            0       0.0 %

  Large I/O Usage
    Total Large I/O Requests         0.0           0.0            0        n/a

  Large I/O Detail
    No Large Pool(s) In This Cache

  Dirty Read Behavior
    Page Requests                    0.0           0.0            0        n/a

-------------------------------------------------------------------------------
Cache: tempdb_cache
                                  per sec      per xact       count  % of total
  -------------------------  ------------  ------------  ----------  ----------
    Spinlock Contention              n/a           n/a          n/a      12.6 %
    Utilization                      n/a           n/a          n/a       7.0 %

  Cache Searches
    Cache Hits                     597.1         273.9       179130      99.3 %
      Found in Wash                  0.0           0.0            0       0.0 %
    Cache Misses                     4.4           2.0         1312       0.7 %
  -------------------------  ------------  ------------  ----------
  Total Cache Searches             601.5         275.9       180442

  Pool Turnover
    4 Kb Pool
      LRU Buffer Grab              231.4         106.1        69422      99.5 %
        Grabbed Dirty                0.0           0.0            0       0.0 %
    32 Kb Pool
      LRU Buffer Grab                1.1           0.5          316       0.5 %
        Grabbed Dirty                0.0           0.0            0       0.0 %
  -------------------------  ------------  ------------  ----------
  Total Cache Turnover             232.5         106.6        69738

  Buffer Wash Behavior
    Statistics Not Available - No Buffers Entered Wash Section Yet

  Cache Strategy
    Cached (LRU) Buffers           560.3         257.0       168085     100.0 %
    Discarded (MRU) Buffers          0.0           0.0            0       0.0 %

  Large I/O Usage
    Large I/Os Performed             1.1           0.5          316     100.0 %
    Large I/Os Denied due to
      Pool < Prefetch Size           0.0           0.0            0       0.0 %
      Pages Requested Reside
      in Another Buffer Pool         0.0           0.0            0       0.0 %
  -------------------------  ------------  ------------  ----------
  Total Large I/O Requests           1.1           0.5          316

  Large I/O Detail
    32 Kb Pool
      Pages Cached                   8.4           3.9         2528        n/a
      Pages Used                     1.3           0.6          390      15.4 %

  Dirty Read Behavior
    Page Requests                    0.0           0.0            0        n/a

  Tuning Recommendations for Data cache : tempdb_cache
  -------------------------------------
  - Consider using Named Caches or Cache partitions or both.
> First, the "Run Replacement: relaxed LRU" should be set to "Run
> Replacement: strict LRU".

That's right, this change should help you if you have many dirty pages. Also consider increasing the cache size if possible.

> Doesn't seem to be any logical reason for using a large I/O with
> tempdb_cache. It should be removed. It's consuming more than half the
> cache.

The LRU Buffer Grab for your 4 Kb pool shows 99.5 %, whereas for the 32 Kb pool it is 0.5 %. Instead of removing the 32 Kb pool outright, giving more memory to the 4 Kb pool and taking some away from the 32 Kb pool should help.

> I also notice that the "Spinlock Contention is 12.6 %"

Consider partitioning the cache; if you have already partitioned it, increase the number of partitions.

HTH
Manish Negandhi [TeamSybase Intern]
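If it helps, the rebalancing Manish describes can be done with sp_poolconfig. A minimal sketch only, assuming the 1 GB tempdb_cache shown earlier; the 200 MB figure is purely illustrative, not a recommendation, and memory released from the 32K pool goes back to the smallest (here 4K) pool:

1> sp_poolconfig 'tempdb_cache', '200M', '32K'   -- illustrative size: shrink the 32K pool
2> go
1> sp_cacheconfig 'tempdb_cache'                 -- verify the new pool layout
2> go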
On 2008-04-11 17:15:16 +1000, Manish Negandhi <negandhi.manish@gmail.com> said:

> LRU Buffer Grab for your 4 Kb pool shows 99.5 %, whereas for the 32 Kb
> pool it is 0.5 %; instead of removing the 32 Kb pool outright, giving
> more memory to 4k and taking away some memory from 32k should help

When you reduce/drop the 32k pool, the space "moves" to the 4k pool.

--
Cheers
Derek
Senior Sybase DBA / Information Architect
Copyright © 2008 Software Gems Pty Ltd
Quality Standards = Zero Maintenance + Zero Surprises
Performance Standards = Predictability + Scaleability
> On 2008-04-11 11:57:35 +1000, Susan Mills said:
>
> New issue, on a different server... "The 4K memory pool of
> named cache tempdb_cache (cache id 3, cachelet id 1) is
> configured too small for current demands (state 2).
> Transaction progress may cease or response time may
> increase."
>
> What would cause this? I believe Derek mentioned in the
> other Discussion Thread that these types of errors are
> generated when a "create index" or "update stats" is
> performed on a large table. Is it safe for us to assume
> that this is the cause of this error message as well?

I did not quite say that. The error msg happens WHEN you run those types of jobs (which cannot be avoided). The error msg happens BECAUSE there is not enough space in the 4k pool; this is usually (and in your case, definitely) because the large I/O pool[s] are too large, relative to the 4k pool.

What I said in that thread, and the same is true here, is that the large I/O pool should be reduced and the space moved to the 4k pool.

> I think I just caught the reason why we may be getting the
> "The 4K memory pool of named cache tempdb_cache (cache id 3,
> cachelet id 1) is configured too small" message / error.
>
> Two reasons actually...
>
> 1) First, the "Run Replacement: relaxed LRU" should be set
> to "Run Replacement: strict LRU".

No, that was a wild goose chase. Leave it in its default state (even if you change it, it will change back on the next reboot ... for the d d c, log_cache and tempdb_cache). In any case it has very little impact.

> 2) There doesn't seem to be any logical reason for using a large
> I/O with tempdb_cache. It should be removed. It's
> consuming more than half the cache.

Correct. Large I/O pools are good in some caches, definitely in the default data cache; they are simply wrong in the tempdb_cache, the log_cache, and the sys_cache. In any case, when you do have a large I/O pool it must be sized correctly.

> sp_sysmon output

5 minutes is not a reasonable sample for this purpose; next time get at least one hour. Nevertheless, this one is good enough to make the following conclusions:

1 Confirming the tempdb_cache/32k pool is useless. When large I/Os were denied 100% of the time, they were found in the 4k pool (which one would expect for temp tables).

2 Assuming you have multiple engines, you need to set up cache partitions (there is a good recent thread on that subject). If you do not have a figure that you have calculated yourself, my recommendation is the largest power of two, less than or equal to the number of engines. You can always go up (to the next power of two) if necessary, based on sysmons.

--
Cheers
Derek
Senior Sybase DBA / Information Architect
Copyright © 2008 Software Gems Pty Ltd
Quality Standards = Zero Maintenance + Zero Surprises
Performance Standards = Predictability + Scaleability
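As a concrete sketch of the two points above (a one-hour sp_sysmon sample and cache partitions), something along these lines could be used. The partition count of 4 is an invented example (calculate your own as described), and on this ASE version the new partition count may, to my understanding, only take effect after a restart:

1> sp_sysmon '01:00:00'                              -- one-hour sample, as advised
2> go
1> sp_cacheconfig 'tempdb_cache', 'cache_partition=4' -- illustrative partition count only
2> go
1> sp_helpcache 'tempdb_cache'                        -- compare config vs run partition values
2> go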
Derek,

Thank you for your great feedback. Looks like all our servers are configured in this manner. Unfortunately, they were configured this way before I joined. The person who configured all the servers went back to their home country before I joined, so I wasn't able to ask them what their reason was for this type of design. I'm sure they had good reasons or intentions, but I don't know what they are.

Below seems to be our current environment.

- All have 32K I/O pools (including tempdb_cache, except for log_cache).
- All have the 32K I/O pool size larger than the 4K pool (40% 4K / 60% 32K space configured).
- And all are set with "Run Replacement: relaxed LRU" (not the default "strict LRU").

I'm trying to walk through each server and resolve the memory errors, yet also tune/configure appropriately without degrading the environment. So my goal is to follow your advice and do the following...

- Ensure that all servers have a minimum of a "default data cache", a "tempdb_cache" and a "log_cache".
- Ensure that the "tempdb_cache" and the "log_cache" do not have large I/O pools (i.e. 32K pool); see the sketch after this post.
- Ensure that all caches are the appropriate size.
- Ensure that all of the caches have the default "strict LRU" set (except for the "log_cache").
- Create "Named Caches" for resource-hungry tables and indexes (i.e. reporting / Datamart tables and indexes).

What are your thoughts? Did I cover all bases? :)

Thank you

> > On 2008-04-11 11:57:35 +1000, Susan Mills said:
> >
> > New issue, on a different server... "The 4K memory pool
> > of named cache tempdb_cache (cache id 3, cachelet id 1)
> > is configured too small for current demands (state 2).
> > Transaction progress may cease or response time may
> > increase."
> >
> > What would cause this? I believe Derek mentioned in the
> > other Discussion Thread that these types of errors are
> > generated when a "create index" or "update stats" is
> > performed on a large table. Is it safe for us to assume
> > that this is the cause of this error message as well?
>
> I did not quite say that. The error msg happens WHEN you
> run those types of jobs (which cannot be avoided). The
> error msg happens BECAUSE there is not enough space in
> the 4k pool; this is usually (and in your case,
> definitely) because the large I/O pool[s] are too large,
> relative to the 4k pool.
>
> What I said in that thread, and the same is true here, is
> that the large I/O pool should be reduced and the space
> moved to the 4k pool.
>
> > I think I just caught the reason why we may be getting
> > the "The 4K memory pool of named cache tempdb_cache
> > (cache id 3, cachelet id 1) is configured too small"
> > message / error.
> >
> > Two reasons actually...
> >
> > 1) First, the "Run Replacement: relaxed LRU" should be
> > set to "Run Replacement: strict LRU".
>
> No, that was a wild goose chase. Leave it in its default
> state (even if you change it, it will change back on the
> next reboot ... for the d d c, log_cache and
> tempdb_cache). In any case it has very little impact.
>
> > 2) There doesn't seem to be any logical reason for using a
> > large I/O with tempdb_cache. It should be removed.
> > It's consuming more than half the cache.
>
> Correct. Large I/O pools are good in some caches,
> definitely in the default data cache; they are simply
> wrong in the tempdb_cache, the log_cache, and the
> sys_cache. In any case, when you do have a large I/O
> pool it must be sized correctly.
>
> > sp_sysmon output
>
> 5 minutes is not a reasonable sample for this purpose;
> next time get at least one hour. Nevertheless, this one
> is good enough to make the following conclusions:
>
> 1 Confirming the tempdb_cache/32k pool is useless. When
> large I/Os were denied 100% of the time, they were found
> in the 4k pool (which one would expect for temp tables).
>
> 2 Assuming you have multiple engines, you need to set up
> cache partitions (there is a good recent thread on that
> subject). If you do not have a figure that you have
> calculated yourself, my recommendation is the largest
> power of two, less than or equal to the number of
> engines. You can always go up (to the next power of two)
> if necessary, based on sysmons.
> --
> Cheers
> Derek
> Senior Sybase DBA / Information Architect
> Copyright © 2008 Software Gems Pty Ltd
> Quality Standards = Zero Maintenance + Zero Surprises
> Performance Standards = Predictability + Scaleability
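A minimal sketch of the pool removal referred to in the checklist above, using sp_poolconfig; the cache names come from this thread, and the behaviour assumed is that setting a pool's size to zero drops the pool and returns its memory to the smallest pool in that cache (verify against your version's documentation before running on production):

1> sp_poolconfig 'tempdb_cache', '0K', '32K'   -- drop the 32K pool from tempdb_cache
2> go
1> sp_poolconfig 'log_cache', '0K', '32K'      -- and from log_cache, if one exists there
2> go
1> sp_cacheconfig 'tempdb_cache'               -- confirm only the 4K pool remains
2> go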
> On 2008-04-11 20:35:20 +1000, Susan Mills said:
>
> Looks like all our servers are configured in this manner. Unfortunately,
> they were configured this way before I joined. The person who configured
> all the servers went back to their home country before I joined, so I
> wasn't able to ask them what their reason was for this type of design.

I trust you appreciate the serious negative impact. To do that across the board for all servers takes serious intention. If the person was actually experienced, then it is sabotage; I would check the circumstances of their departure. If they were actually green (regardless of declaration of experience), it could be forgiven, they were learning on the job, but management should think about why they let greenhorns loose on their production servers.

> I'm sure they had good reasons or intentions, but I don't know what they are.

Nonsense. I have had to do forensic work. Anyone with good intentions checks that their actions matched their intentions. In any case, you need to watch for other config/set up that has a negative impact.

> - Ensure that the "tempdb_cache" and the "log_cache" do not
> have large I/O pools (i.e. 32K pool).

For caches that do have a large I/O pool, eg default data cache, it should be (for a good starting point from which to proceed as per sp_sysmons) 10% of the cache size. Then decrease/increase as necessary.

> - Ensure that all of the caches have the default "strict LRU"
> set (except for the "log_cache").

I have already posted my views on that. It makes a very small difference in your context. You can leave it at "relaxed" for now, and change it in the future as you gain understanding and experience.

> - Create "Named Caches" for resource-hungry tables and indexes
> (i.e. reporting / Datamart tables and indexes).

If you do that, then each one must be sized on the basis of a containment strategy, not the full table[s] size.

> Did I cover all bases?

Just a few more.

1 The above would be a good first iteration of a cache set up. As part of my standard config, at this first level, I include an NCI_cache and bind all non-clustered indices to it. It must be carefully sized; it does need a large I/O pool; from what I have seen of your servers I would recommend a 16k pool, not a 32k pool.

2 We haven't discussed it, but the main objective of cache set up is to make these small named caches either exactly sized (tempdb, log, NCI) or allocated as a thoughtful small block for containment purposes (one or more caches for your "hungry" tables), leaving the bulk of memory for the default data cache (which is needed for all heaps and clustered indices). Also very important for recovery. You do not want to end up with many large named caches and a small d d c.

3 Cache partitions as per prev post.

--
Cheers
Derek
Senior Sybase DBA / Information Architect
Copyright © 2008 Software Gems Pty Ltd
Quality Standards = Zero Maintenance + Zero Surprises
Performance Standards = Predictability + Scaleability
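For illustration of the NCI_cache idea in point 1, a hedged sketch follows. The cache name NCI_cache comes from the post above; the cache size, database, table and index names (my_database, my_table, my_nc_index) are invented placeholders, and the 16K pool at roughly 10% of the cache follows the starting point described earlier. Depending on free memory below 'max memory', creating the cache may or may not require a restart:

1> sp_cacheconfig 'NCI_cache', '200M'                               -- create the named cache (size illustrative)
2> go
1> sp_poolconfig 'NCI_cache', '20M', '16K'                          -- ~10% of the cache as a 16K pool
2> go
1> use my_database                                                   -- bindings are made from the object's database
2> go
1> sp_bindcache 'NCI_cache', 'my_database', 'my_table', 'my_nc_index'
2> go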
"Derek Asirvadem" <derek.asirvadem@gmailDOTcom> wrote in message news:47ff3226@forums-1-dub... >> On 2008-04-11 11:57:35 +1000, Susan Mills said: > >> 2) Doesn't seem to be any logical reason in using a large >> I/O with tempdb_cache. It should be removed. It's >> consuming more than half the cache. > > Correct. Large I/O pools are good in some caches, definitely in the > default data cache; they are simply wrong in the tempdb_cache, the > log_cache, and the sys_cache. In any case, when you do have a large I/O > pool it must be sized correctly. ??? I had never heard of this kind of advice before, of stating that Large IO pools are "simply wrong" for tempdb purposes. Would you care to expand on that? Or am I missing something obvious here?
"Sherlock, Kevin" <kevin.sherlock@teamsybase.com> wrote in message news:47ffa39c@forums-1-dub... > > "Derek Asirvadem" <derek.asirvadem@gmailDOTcom> wrote in message > news:47ff3226@forums-1-dub... >>> On 2008-04-11 11:57:35 +1000, Susan Mills said: >> >>> 2) Doesn't seem to be any logical reason in using a large >>> I/O with tempdb_cache. It should be removed. It's >>> consuming more than half the cache. >> >> Correct. Large I/O pools are good in some caches, definitely in the >> default data cache; they are simply wrong in the tempdb_cache, the >> log_cache, and the sys_cache. In any case, when you do have a large I/O >> pool it must be sized correctly. > > ??? I had never heard of this kind of advice before, of stating that Large > IO pools are "simply wrong" for tempdb purposes. Would you care to expand > on that? Or am I missing something obvious here? I agree with Kevin. In fact, I'd like to point out that large buffer pools are useful in tempdb, especially since 12.5.0.3: as of that version writes can be done with large buffers (which was previously not allowed in tempdb and large buffers would be used only for reading). This means that things like select-into could benefit from having large buffer pools in tempdb. Rob V.
> "Sherlock, Kevin" <kevin.sherlock@teamsybase.com> wrote in message > news:47ffa39c@forums-1-dub... > >> "Derek Asirvadem" <derek.asirvadem@gmailDOTcom> wrote in message >> news:47ff3226@forums-1-dub... >> >> Large I/O pools are good in some caches, definitely in the default data >> cache; they are simply wrong in the tempdb_cache, the log_cache, and >> the sys_cache. In any case, when you do have a large I/O pool it must >> be sized correctly. > > ??? I had never heard of this kind of advice before, of stating that > Large IO pools are "simply wrong" for tempdb purposes. Would you care > to expand on that? Or am I missing something obvious here? (Forgive the delay in my response, I had an intense week.) 1 Ok, so we are discussing this as generic advice, not particular to Susan's case ( which was an error). My advice comes from experience of P&T at many sites, not what the book says, or what the gurus say, or what should (logically) be a good idea. These sites are large and varied, but tempdb is always thrashed and benefits from tempdb tuning & tempdb_cache tuning. Logically a large I/O pool for tempdb may sound like a good idea, but in my exp it is not a good idea in common use (it may well be a good idea in some cases). Besides, one should not just walk up to ASE, whack in a named cache for tempdb and whack in a proportionate large I/O pool; one should get an idea of what activity tempdb is used for, etc. Plus the sizing is important, ie how much tempdb, or how much table for single massive queries do you want to get into the cache. The generic steps that always result in a performance gain on tempdb usage are: • use a 4k pool (over a 2k pool) [but Susan already has a 4k pagesize, and thus a 4k smallest poolsize] • do not bother with large I/O pool, it does not get used that much at all; or the usage vs the 2k/4k pool is so low that it is not relevant to justify itself. Watch for large I/Os denied due to being found in another pool. In my experience, even the index creation does not use it. In three sites last year I removed the Lareg I/O pools in tempdb_cache and that increased overall tempdb performance (because the space was released to the 4k pool). Anyone wishing to verify the facts only needs to create a large I/O pool in tempdb_cache; watch its actual (not predicted) usage and see for themselves; or remove it if they already have one and look at the difference. There is no use having an academic discussion when one argument relates to actual experience. 2 I trust you agree that large I/O pools are simply wrong for the log_cache & sys_cache. I expect you will agree that tempdb cache demands are mostly the creation of small packets (which may later add up to a large table, but that is later). Therefore the creation (grab a buffer and write) similarities are the same. > On 2008-04-12 05:19:33 +1000, "Rob Verschoor" > <rob@DO.NOT.SPAM.sypron.nl.REMOVE.THIS.DECOY> said: > > > I agree with Kevin. > In fact, I'd like to point out that large buffer pools are useful in > tempdb, especially since 12.5.0.3: as of that version writes can be > done with large buffers (which was previously not allowed in tempdb and > large buffers would be used only for reading). This means that things > like select-into could benefit from having large buffer pools in tempdb. As stated, try it for yourself on a production system, I am not going to get into academic discussions. 
Everyone was waiting for the "Lazy Write" feature which promised relief in many specific circumsatnces (refer the number of posts where "... that will be fixed with the Lazy Write"), when it came, it did nothing (produced no actual benefit) for the systems I worked on (all post 12.5.4), which admittedly had tuned tempdbs and tempdb_caches. I am not saying the feature dioes nothing for untuned systems. Similarly, what relevance (when would anyone actually use) is writing from the large I/O pool of tempdb_cache ? You have to get ASE to read into it first. Why would anyone select-into (from tempdb) to tempdb ? It is like updating tempdb, I am with Jeff Tallman on that one. (Creation of the correct joined temp table eliminates that.) -- Cheers Derek Senior Sybase DBA / Information Architect Copyright © 2008 Software Gems Pty Ltd Quality Standards = Zero Maintenance + Zero Surprises Performance Standards = Predictability + Scaleability
On 2008-04-25 06:06:47 +1000, Elohssa Nasi Kered said:

>> Why would anyone select-into (from tempdb) to tempdb ?
>
> Are you pounding the pud when you read your own posts?

I do not understand that slang, sorry. It is a technical issue; follow the thread, and if you do not understand ...

>> It is like updating tempdb, I am with Jeff Tallman on that one.
>> (Creation of the correct joined temp table eliminates that.)

.... maybe post a question.

Upon second reading of my post, it occurs to me that it may look like I am disagreeing with large I/O pools in tempdb_cache, period. I am not. As detailed in my response to Kevin's & Rob's "please explain", I have stated a config which is the result of several P&T iterations, NOT the simpler and more common form of tuning the tempdb cache, which is to just add a (eg) 16k pool to the existing 2k pool. In my experience that is always a large improvement. I am not speaking to that config.

In summary, what I am stating for tempdb_cache is: a 4k pool is much better than a 2k pool; once you have that, a larger poolsize is not necessary. And that is better than a 2k+16k config (which is the generic advice).

Susan's config is a total and complete error (the 4k pool, 400mb of the 1gb, takes 99.5% of the buffer grabs; the 32k pool, 600mb, takes 0.5%); it is not exemplary, but it does provide an already posted indication, if you understand my logic.

Note that, as stated, I am comparing overall tempdb performance, not just utilisation of the two (or more) poolsizes in tempdb_cache, and the context is several other structures to improve tempdb performance (not just a large I/O pool). As always with P&T, YMMV. Try out the various combinations yourself and post your results.

--
Cheers
Derek
Senior Sybase DBA / Information Architect
Copyright © 2008 Software Gems Pty Ltd
Quality Standards = Zero Maintenance + Zero Surprises
Performance Standards = Predictability + Scaleability
Derek,

I stated in my reply that I don't agree with the post and that it will be removed. I don't like personal comments such as that; it just made me laugh (it's not a phrase I've heard for years). As my other post was removed for no apparent reason and with no explanation, none of this makes much difference anyway. Incidentally, the e-mail address behind my posts is a genuine one and it is owned by me, so if a moderator would care to e-mail me to tell me why my post (not the one this is about, as we know) was removed, I would appreciate it.

I have nothing against you at all, Derek; you are clearly an exceptional guy, but my original post was intended as constructive. I just wish you could have taken it that way and on the metaphorical 'chin'.

"Derek Asirvadem" <derek.asirvadem@gmailDOTcom> wrote in message news:4811539b@forums-1-dub...
>> On 2008-04-25 06:06:47 +1000, Elohssa Nasi Kered said:
>
>> On 2008-04-25 06:37:26 +1000, "Be positive" <macallan30@sky.com> said:
>
> You guys are defining yourselves, your character, the exact level of
> maturity and professionalism, by the way you have a laugh at other
> people's expense.
>
> And this from the person who seeks team play, magnanimity, sensitivity to
> others' feelings. Evidently you have never heard of hypocrisy, meanwhile
> your every action proves you to be one. That is very sad; and
> certainly hilarious; but it is not a joke. It is evidence of damage at
> the soul level.
>
> I suppose you will go back to ringing door bells and running away.
> Hopefully you will ring Elohssa's, and he will ring yours, and leave the
> rest of us to get on with our jobs.
>
> Why do we keep on allowing transient unidentified flying objects to use
> this as their personal blogsite ?
> --
> Cheers
> Derek
> Senior Sybase DBA / Information Architect
> Copyright © 2008 Software Gems Pty Ltd
> Quality Standards = Zero Maintenance + Zero Surprises
> Performance Standards = Predictability + Scaleability
> On 2008-04-25 12:51:05 +1000, Derek Asirvadem
> <derek.asirvadem@gmailDOTcom> said:

Please delete the above post (mine). I will post the technical response in a more appropriate sub-thread.

--
Cheers
Derek
Senior Sybase DBA / Information Architect
Copyright © 2008 Software Gems Pty Ltd
Quality Standards = Zero Maintenance + Zero Surprises
Performance Standards = Predictability + Scaleability
Upon second reading of my post, it occurred to me that it may look like I am disagreeing with large I/O pools in tempdb_cache, period. I am not. As detailed in my response to Kevin's & Rob's "please explain", I have stated a config which is the result of several P&T iterations, NOT the simpler and more common form of tuning the tempdb cache, which is to just add a (eg) 16k pool to the existing 2k pool. In my experience that is always a large improvement. I am not speaking to that config, I am speaking to the previously posted config.

In summary, what I am stating for tempdb_cache is: a 4k pool is much better than a 2k pool; once you have that, a larger poolsize is not necessary. And that is better than a 2k+16k config (which is the generic advice).

Susan's config is a total and complete error (the 4k pool, 400mb of the 1gb, takes 99.5% of the buffer grabs; the 32k pool, 600mb, takes 0.5%); it is not exemplary, but it DOES provide an indication, if you understand my logic, in an already posted example.

Note that, as stated, I am comparing overall tempdb performance, not just utilisation of the two (or more) poolsizes in tempdb_cache, and the context is several other structures to improve tempdb performance (not just a large I/O pool). As always with P&T, YMMV. Try out the various combinations yourself and post your results.

>> Why would anyone select-into (from tempdb) to tempdb ?

If you do not understand why I think that is an error ...

> It is like updating tempdb, I am with Jeff Tallman on that one.
> (Creation of the correct joined temp table eliminates that.)

.... or you cannot find the referred threads, maybe post a question.

--
Cheers
Derek
Senior Sybase DBA / Information Architect
Copyright © 2008 Software Gems Pty Ltd
Quality Standards = Zero Maintenance + Zero Surprises
Performance Standards = Predictability + Scaleability
Everything I state is In My Humble Opinion
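For anyone on a 2K-logical-pagesize server who wants to try the single-4K-pool approach summarised above, the pool is carved out of the existing 2K pool with sp_poolconfig. A sketch only; the 750M figure is purely illustrative and should be sized from your own sysmon data:

1> sp_poolconfig 'tempdb_cache', '750M', '4K'   -- illustrative size: create/grow the 4K pool
2> go
1> sp_cacheconfig 'tempdb_cache'                -- confirm the resulting 2K/4K split
2> go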