Topic: Repairing statistics: a philosophical question
Suppose we have a data warehouse where some queries perform unstably. Analysis showed that the problem is in the statistics on several tables, which have not been updated for a long time.
The obvious fix is to gather current statistics; the analysis suggests this would correct the plans.
The problem is that the database is littered with a pile of SQL profiles and baselines (automatic capture is even enabled). That means that in theory we can fix some of our plans, but in doing so we may break a number of others, some of which may be more critical. Strictly speaking we would be breaking them "correctly", but I would still rather avoid the night-time phone calls.
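For context, one direction I was considering is Oracle's pending statistics mechanism (available since 11g), which lets you gather new statistics without publishing them and try them out in a single session first. A rough sketch, where the DWH schema and FACT_SALES table are made-up names:

```sql
-- Assumed names: schema DWH, table FACT_SALES. Requires Oracle 11g+.
-- 1. Tell DBMS_STATS not to publish freshly gathered stats for this table.
BEGIN
  DBMS_STATS.SET_TABLE_PREFS(
    ownname => 'DWH',
    tabname => 'FACT_SALES',
    pname   => 'PUBLISH',
    pvalue  => 'FALSE');
END;
/

-- 2. Gather statistics; they land in the pending area, not the dictionary,
--    so production plans are unaffected.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'DWH', tabname => 'FACT_SALES');
END;
/

-- 3. In a test session only, make the optimizer see the pending stats,
--    then re-explain the candidate queries.
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;

-- 4. Publish if the plans look good, or throw the stats away if not.
BEGIN
  DBMS_STATS.PUBLISH_PENDING_STATS(ownname => 'DWH', tabname => 'FACT_SALES');
  -- or: DBMS_STATS.DELETE_PENDING_STATS('DWH', 'FACT_SALES');
END;
/
```

This only shows plan changes for the queries you think to check, though, which is exactly why I am asking about assessing the overall impact.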
Is there a way to check how our fix affects the overall performance of the system before applying it?
One option, of course, is to test on a UAT database. The problem is that however hard you try, UAT never ends up 100% equivalent to production (for administrative reasons, among others), and when it comes to performance, any detail can matter. What other options are there? What do you do in similar cases?
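One candidate I came across is SQL Performance Analyzer (DBMS_SQLPA, part of the separately licensed Real Application Testing option). A sketch of how I understand it would be used; the tuning-set and task names are placeholders, and an existing SQL Tuning Set ('MY_STS') capturing the workload is assumed:

```sql
DECLARE
  tname VARCHAR2(64);
BEGIN
  -- Analysis task over a pre-built SQL Tuning Set of the workload.
  tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(
             sqlset_name => 'MY_STS',
             task_name   => 'STATS_FIX_TASK');

  -- Trial 1: current state. 'TEST EXECUTE' actually runs the statements
  -- (fine on a test box); 'EXPLAIN PLAN' would only re-optimize them.
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'STATS_FIX_TASK',
    execution_type => 'TEST EXECUTE',
    execution_name => 'before_change');

  -- ... apply the change here (e.g. publish the new statistics) ...

  -- Trial 2: after the change.
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'STATS_FIX_TASK',
    execution_type => 'TEST EXECUTE',
    execution_name => 'after_change');

  -- Compare the two trials and flag regressed vs improved statements.
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'STATS_FIX_TASK',
    execution_type => 'COMPARE PERFORMANCE',
    execution_name => 'compare');
END;
/

-- Report of the comparison.
SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('STATS_FIX_TASK', 'HTML') FROM dual;
```

But this still runs into the UAT-equivalence problem above, so I am interested in what people use in practice.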
Suppose we pull from AWR every query carrying any significant load that touches these tables. How can we verify that none of these queries will start performing worse, in every possible situation? Simply running them is not really an option, since this is a data warehouse and some of them run for hours. Is it possible to estimate a query's resource consumption without executing it? Which resources should be taken into account? Are there tools for this, including standard ones?
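To make the AWR part concrete, here is roughly what I had in mind: load the heaviest statements from an AWR snapshot range into a SQL Tuning Set, then have SPA re-optimize them in 'EXPLAIN PLAN' mode, which produces plans and optimizer costs without executing anything. Snapshot IDs, the tuning-set name, and the crude sql_text filter are all placeholders:

```sql
-- 1. Build a SQL Tuning Set from AWR (assumed snapshots 1000..1100,
--    top 100 statements by elapsed time; the LIKE filter is a crude
--    stand-in for "queries touching our tables").
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'DWH_HEAVY_SQL');

  OPEN cur FOR
    SELECT VALUE(p)
    FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
                 begin_snap       => 1000,
                 end_snap         => 1100,
                 basic_filter     => 'sql_text LIKE ''%FACT_SALES%''',
                 ranking_measure1 => 'elapsed_time',
                 result_limit     => 100)) p;

  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name     => 'DWH_HEAVY_SQL',
                           populate_cursor => cur);
END;
/

-- 2. An SPA trial in 'EXPLAIN PLAN' mode only re-parses each statement,
--    so plan and cost changes can be compared even for hour-long queries.
DECLARE
  tname VARCHAR2(64);
BEGIN
  tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(
             sqlset_name => 'DWH_HEAVY_SQL',
             task_name   => 'PLAN_ONLY_TASK');
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'PLAN_ONLY_TASK',
    execution_type => 'EXPLAIN PLAN',
    execution_name => 'plan_only');
END;
/
```

My worry is that optimizer cost is only an estimate, not actual resource consumption, which is why I am asking what else is worth measuring.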