I never advise following rules of thumb blindly, but the items below can be worked through as a checklist, one by one, while tuning I/O:
rule 1: minimize overall I/O.
rule 2: maximize cached I/O.
rule 3: minimize I/O contention.
How to cut I/O:
1. Cut unnecessary fetches. Be restrictive about the columns in the select list, and make sure every column fetched in an explicit/implicit cursor is actually used somewhere in the code. Where an index covers the select list, try to take advantage of the 'fast full index scan'.
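As a sketch (the EMP table and the index EMP_DEPTNO_SAL_IDX on (DEPTNO, SAL) are hypothetical), fetching only the needed columns can let the optimizer answer the query from the index alone:

```sql
-- Wasteful: drags every column of every matching row through the buffer cache
SELECT * FROM emp WHERE deptno = 10;

-- Better: only the needed columns; if an index covers (deptno, sal),
-- the optimizer can use an INDEX FAST FULL SCAN instead of reading the table
SELECT deptno, SUM(sal)
FROM   emp
GROUP  BY deptno;

-- The INDEX_FFS hint can be used to test the fast full scan access path
SELECT /*+ INDEX_FFS(e emp_deptno_sal_idx) */ deptno, SUM(sal)
FROM   emp e
GROUP  BY deptno;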
2. Check the usefulness of your indexes. An index may be slowing down DMLs without yielding any query performance gain at all; identify such indexes and drop them.
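One way to identify candidates (index name hypothetical) is Oracle's built-in index usage monitoring:

```sql
-- Start monitoring a suspect index
ALTER INDEX emp_hiredate_idx MONITORING USAGE;

-- ...let a representative workload run, then check:
SELECT index_name, used, start_monitoring
FROM   v$object_usage
WHERE  index_name = 'EMP_HIREDATE_IDX';

-- If USED stays 'NO' over a full business cycle, consider dropping it.
-- Caveat: monitoring does not catch indexes used only to enforce
-- constraints or to help the optimizer's cardinality estimates.
ALTER INDEX emp_hiredate_idx NOMONITORING USAGE;
```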
3. Avoid triggers that perform a lot of queries/transactions and auditing internally - these can really slow down DMLs, especially when DMLs are issued in bulk.
4. Check that all tables/indexes have appropriate values set for PCTFREE and PCTUSED.
PCTFREE defaults to 10%, so for objects that will never be updated you may be wasting 10% extra disk/cache space and causing extra I/O for nothing.
Similarly, setting PCTUSED quite high means blocks go on and off the free list more frequently.
Trick: setting PCTFREE higher can reduce hot-block contention, since fewer rows fit in each block.
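A minimal sketch of both directions (table names hypothetical):

```sql
-- Insert-only audit table: rows are never updated, so reserving the
-- default 10% free space in every block is pure waste
ALTER TABLE audit_log PCTFREE 0;

-- Heavily updated table whose rows grow: leave more room so updates
-- do not cause row migration
ALTER TABLE order_status PCTFREE 30;
```

Note that changing PCTFREE only affects blocks formatted after the change; existing blocks keep their old setting until the segment is rebuilt (e.g. ALTER TABLE ... MOVE plus an index rebuild).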
5. If CPU resources are available, some tables can be compressed. This not only minimizes I/O at the expense of CPU but also meets the objective "maximize cache" - how? Because the table now needs fewer buffers, you have more free buffers to which other objects can be assigned. This is very useful when there is no shortage of CPU but memory is scarce.
Remember that 10g compresses data only for direct-path operations such as CTAS and INSERT /*+ APPEND */ ... SELECT queries. It is 11g that brings OLTP table compression.
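Both variants sketched below (table names hypothetical; the 11g syntax also requires the Advanced Compression option):

```sql
-- 10g-style: compression applies only to direct-path loads
CREATE TABLE sales_hist COMPRESS
AS SELECT * FROM sales WHERE sale_date < DATE '2008-01-01';

INSERT /*+ APPEND */ INTO sales_hist
SELECT * FROM sales WHERE sale_date < DATE '2009-01-01';

-- 11g OLTP compression: conventional DML stays compressed too
-- (11gR2 syntax; 11gR1 uses COMPRESS FOR ALL OPERATIONS)
ALTER TABLE orders COMPRESS FOR OLTP;
```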
6. If using materialized views for replication or reporting, try to make their refresh possible by the FAST method. If you are stuck with the COMPLETE mechanism, think twice - what about refreshing by truncate/insert instead?
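A sketch of the FAST-refresh setup (table and view names hypothetical; an aggregate MV needs COUNT(*) and COUNT of each summed column to be fast-refreshable):

```sql
-- A materialized view log on the master table is a precondition
-- for FAST (incremental) refresh
CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID, SEQUENCE (prod_id, amount)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW sales_mv
  REFRESH FAST ON DEMAND
  AS SELECT prod_id,
            SUM(amount) AS total_amount,
            COUNT(amount) AS cnt_amount,
            COUNT(*) AS cnt
     FROM   sales
     GROUP  BY prod_id;

-- 'F' = fast, 'C' = complete; DBMS_MVIEW.EXPLAIN_MVIEW tells you
-- why a given view is not fast-refreshable
EXEC DBMS_MVIEW.REFRESH('SALES_MV', 'F');
```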
7. Optimize query execution plans. This is a subjective topic in itself:
- Tables should be indexed appropriately and the indexes should have good selectivity. If an index is not unique, it is good for it to have a low clustering factor.
- Check that all tables have accurate statistics, gathered while the tables held representative data.
- Check that the instance optimizer parameters are set correctly. On RAC, they should have the same values on all instances.
- Trade off parse-time CPU savings against good execution plans: cursor_sharing=EXACT may be far better than cursor_sharing=SIMILAR or cursor_sharing=FORCE.
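The statistics and clustering-factor checks above can be sketched as follows (schema and table names hypothetical):

```sql
-- Gather representative statistics, including the indexes
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP',
    tabname          => 'ORDERS',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/

-- A clustering factor near the table's block count is good; a value
-- near the row count means index-driven table access will be costly
SELECT i.index_name, i.clustering_factor, t.blocks, t.num_rows
FROM   user_indexes i
JOIN   user_tables  t ON t.table_name = i.table_name
WHERE  t.table_name = 'ORDERS';
```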
Maximize cached I/O:
1. Explore whether you need to configure KEEP and RECYCLE pools in your database for frequently accessed (small) and rarely accessed (bigger) tables, size them appropriately, and assign the related objects to these pools.
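A minimal sketch (sizes and table names are illustrative; with automatic SGA management the KEEP/RECYCLE pools are still sized manually, on top of sga_target):

```sql
-- Carve the pools out of the buffer cache
ALTER SYSTEM SET db_keep_cache_size    = 256M SCOPE = BOTH;
ALTER SYSTEM SET db_recycle_cache_size = 128M SCOPE = BOTH;

-- Pin a small, hot lookup table in KEEP; push a big, rarely
-- re-read table toward RECYCLE
ALTER TABLE currency_rates STORAGE (BUFFER_POOL KEEP);
ALTER TABLE big_staging    STORAGE (BUFFER_POOL RECYCLE);
```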
2. Set the buffer cache high enough to minimize physical reads.
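Rather than guessing, V$DB_CACHE_ADVICE predicts physical reads at different cache sizes (a sketch, assuming db_cache_advice is enabled):

```sql
SELECT size_for_estimate   AS cache_mb,
       size_factor,                      -- 1 = current size
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_for_estimate;
```

Look for the point where ESTD_PHYSICAL_READS stops dropping meaningfully; growing the cache beyond it buys little.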
3. If using a bigger SGA (> 16GB) on Linux, use huge pages for the SGA memory.
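A minimal sketch of the OS side (values are illustrative; size nr_hugepages to cover the whole SGA, and note that huge pages do not work with 11g Automatic Memory Management / memory_target):

```
# /etc/sysctl.conf -- e.g. for a ~20GB SGA with 2MB huge pages
vm.nr_hugepages = 10240

# /etc/security/limits.conf -- let the oracle user lock that memory (KB)
oracle soft memlock 20971520
oracle hard memlock 20971520
```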
4. Set PGA_AGGREGATE_TARGET appropriately. Remember that a higher PGA value can make the optimizer favour a sort-merge join over a nested loop join.
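As with the buffer cache, there is an advisory view to size it from (the 4G value is just an example):

```sql
ALTER SYSTEM SET pga_aggregate_target = 4G SCOPE = BOTH;

-- Estimated work-area cache hit % and over-allocations at other sizes
SELECT ROUND(pga_target_for_estimate/1024/1024) AS target_mb,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
FROM   v$pga_target_advice;
```

Pick a target where ESTD_OVERALLOC_COUNT is zero and the hit percentage has levelled off.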
Minimize I/O contention:
Balance the I/O across multiple disks/arrays if possible.
Take care of all I/O sources: redo logs, undo tablespace, temporary tablespaces, index tablespaces, DATA tablespaces, and archive logs too if the DB is running in archivelog mode. SPREAD these across disks, depending on their concurrent usage. You can check a Statspack/AWR report for I/O usage per tablespace/datafile.
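Between report snapshots, the cumulative per-datafile counters give a quick live picture (a sketch; figures are since instance startup):

```sql
-- Physical I/O per datafile, busiest first
SELECT d.tablespace_name,
       d.file_name,
       f.phyrds  AS physical_reads,
       f.phywrts AS physical_writes
FROM   v$filestat   f
JOIN   dba_data_files d ON d.file_id = f.file#
ORDER  BY f.phyrds + f.phywrts DESC;
```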