Lecture: 10g RAC Best Practices

Text content: 10g RAC Best Practices

  1. 10g RAC Best Practices – Kirk McGowan, Technical Director, RAC Pack, Server Technologies, Oracle Corporation
  2. Disclaimer: These best practices are based on customer experiences and will generally give the best results. However, systems differ in their requirements and cost structures, so these practices may not be applicable in all cases. As technology evolves and new experience is gained, these best practices will likely change over time. They do not replace the standard product documentation, which is the official guide to product use.
  3. Agenda
     • Planning Best Practices: understand and plan the architecture; manage expectations; define objectives and success criteria; build the project plan
     • Implementation Best Practices: infrastructure considerations; installation/configuration; database creation; application considerations
     • Operational Best Practices: backup & recovery; performance monitoring and tuning; production migration
  4. Planning
     • Understand the architecture
       – Cluster terminology and functional basics: HA by eliminating the node and Oracle as single points of failure (SPOFs); scalability by making additional processing capacity available incrementally
       – Hardware components: private interconnect/network switch; shared storage with concurrent access/storage switch
       – Software components: OS, cluster manager, DBMS/RAC, application; differences between cluster managers
  5. RAC Hardware Architecture
     [Diagram: clustered servers joined by a low-latency interconnect (e.g., GigE or proprietary) forming a shared cache; network users reach the servers through a hub or switch fabric; a mirrored disk subsystem on a storage area network behind a high-speed switch; a centralized management console; no single point of failure for the database]
  6. RAC Software Architecture
     [Diagram: shared data model; each instance runs GES & GCS over its own shared memory/global area with SQL and buffer layers and its own redo log thread; all instances access a shared-disk database]
  7. 10g Technology Architecture
     [Diagram: nodes 1-3 on a public network, each with a VIP (VIP1-VIP3), a database instance, an ASM instance, CRS, and the operating system; instances share cache over the cluster interconnect; all instances have concurrent access to shared storage holding the database files, control files, redo logs, and the OCR and voting disk; adding more nodes gives higher "scale out" and availability]
  8. Plan the Architecture
     • Eliminate SPOFs
       – Cluster interconnect redundancy (NIC bonding/teaming)
       – Implement multiple access paths to the storage array using two or more HBAs or initiators; investigate multi-pathing software over these multiple devices to provide load balancing and failover
     • Processing nodes: sufficient CPU to accommodate failure
     • Scalable I/O subsystem: scalable as you add nodes
     • Workload distribution (load balancing) strategy: Net Services (SQL*Net); Oracle 10g Services
     • Establish a management infrastructure to manage to service level agreements (Grid Control)
  9. Cluster Hardware Considerations
     • Cluster interconnects: Fast Ethernet, Gigabit Ethernet, proprietary interconnects (SCI, HyperFabric, Memory Channel); use dual interconnects and stick with GigE/UDP
     • Public networks: Ethernet, Fast Ethernet, Gigabit Ethernet
     • Server recommendations: minimum 2 CPUs per server; 2- and 4-CPU servers are normally the most cost effective; 1-2 GB of memory per CPU; dual I/O paths
     • Intelligent storage or JBOD; Fibre Channel, SCSI, iSCSI, or NAS storage connectivity; future: InfiniBand
  10. Plan the Architecture
     • Shared storage considerations (ASM, CFS, shared raw devices)
     • Use S.A.M.E. (Stripe And Mirror Everything) for the shared storage layout
     • Local ORACLE_HOME versus shared ORACLE_HOME
     • Separate HOMEs for CRS, ASM, and RDBMS
     • OCR and voting disk on raw devices, unless using CFS
  11. RAC Technology Certification
     • For more details on software certification and compatible hardware, consult the certification information on Metalink
     • Discuss the hardware configuration with your hardware vendor
     • Try to stick to standard components that have been properly tested/certified
  12. Set Expectations Appropriately
     • If your application scales transparently on SMP, it is realistic to expect it to scale well on RAC without any changes to the application code
     • RAC eliminates the database instance, and the node itself, as single points of failure, and ensures database integrity in the case of such failures
  13. Planning: Define Objectives
     • Objectives need to be quantified and measurable
       – HA objectives: planned vs. unplanned outages; technology failures vs. site failures vs. human errors
       – Scalability objectives: speedup vs. scaleup; response time, throughput, and other measurements
       – Server consolidation objectives: often tied to TCO; often subjective
  14. Build Your Project Plan
     • Partner with your vendors: multiple stakeholders, shared success
     • Build detailed test plans: confirm application scalability on SMP before going to RAC; optimize first for a single instance
     • Address knowledge gaps and training: clusters, RAC, HA, scalability, systems management; leverage external resources as required
     • Establish strict system and application change control: apply changes to one system element at a time; apply changes first to a test environment; monitor the impact of application changes on underlying system components
     • Define support mechanisms and escalation procedures, including a dedicated, long-term test cluster
  15. Agenda
     • Planning Best Practices: architecture; expectation setting; objectives and success criteria; project plan
     • Implementation Best Practices: installation/configuration; database creation; application considerations
     • Operational Best Practices: backup & recovery; performance monitoring and tuning; production migration
  16. Implementation Flowchart
     Configure HW → Configure OS, public network, private interconnect → Configure shared storage → Install Oracle CRS → Install Oracle software, including RAC and ASM → Run VIPCA (launched automatically from the RDBMS root.sh) → Create the database with DBCA → Validate the cluster/RAC configuration
  17. Operating System Configuration
     • Confirm OS requirements from the platform-specific install documentation, the quick install guides (if available) from Metalink/OTN, and the release notes
     • Follow these steps on EACH node of the cluster:
       – Configure ssh (the 10g OUI uses ssh, not rsh); see the sketch after this list
       – Configure the private interconnect: use UDP and GigE; non-routable IP addresses (e.g., 10.0.0.x); redundant switches as the standard configuration for ALL cluster sizes; NIC teaming configuration (platform dependent)
       – Configure the public network: the VIP and its name must be DNS-registered in addition to the standard static IP information; the VIP will not be visible until the VIPCA install is complete
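     A minimal sketch of the ssh user equivalence the OUI needs, assuming the oracle OS user and illustrative node names rac1 and rac2:

       # Run as the oracle user on each node:
       $ ssh-keygen -t rsa                # accept defaults, empty passphrase
       $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
       # Append every node's public key to every other node's
       # ~/.ssh/authorized_keys, then verify that each command returns
       # without any password or host-key prompt (OUI needs non-interactive ssh):
       $ ssh rac1 date
       $ ssh rac2 date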
  18. NIC Bonding
     • Required for private interconnect resiliency
     • Various third-party vendor solutions are available:
       – Linux: NIC bonding in RHEL 3.0 ES ( nux-2.4/Documentation/networking/bonding.txt ); Intel® Advanced Network Services (ANS) ( x/ans.htm ); HANIC (a bonding sketch follows the next slide)
  19. NIC Bonding (cont.)
     • Solaris: IPMP ( netmultipath/index.html )
     • HP: Auto Port Aggregation on HP-UX ( pa_overview.html ); Tru64
     • AIX: EtherChannel ( atsmastr.nsf/WebIndex/TD101260 )
     • Windows
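     As an illustration of the Linux approach, a bonding configuration on RHEL 3 might look like the following; the mode, monitoring interval, interface names, and addresses are assumptions for the sketch, not recommendations:

       # /etc/modules.conf -- load the bonding driver for bond0
       alias bond0 bonding
       options bond0 mode=active-backup miimon=100

       # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the interconnect address
       DEVICE=bond0
       IPADDR=10.0.0.1
       NETMASK=255.255.255.0
       ONBOOT=yes
       BOOTPROTO=none

       # /etc/sysconfig/network-scripts/ifcfg-eth1 -- enslave a physical NIC
       # (repeat for the second NIC, e.g. eth2)
       DEVICE=eth1
       MASTER=bond0
       SLAVE=yes
       ONBOOT=yes
       BOOTPROTO=none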
  20. Shared Storage Configuration
     • Configure devices for the voting disk and OCR file: voting disk >= 20 MB, OCR >= 100 MB; use storage mirroring to protect these devices (a raw-device sketch follows)
     • Configure shared storage (for ASM): use a large number of similarly sized "disks"; confirm shared access to the storage "disks" from all nodes; use storage mirroring if available; include space for the flash recovery area
     • Configure I/O multi-pathing: ASM must only see a single (virtual) path to the storage; the multi-pathing configuration is platform specific (e.g., PowerPath, SecurePath)
     • Establish a file system or location for the ORACLE_HOME (and the CRS and ASM HOMEs)
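     A minimal sketch of raw-device bindings on Red Hat Linux, with device names and permissions as illustrative assumptions:

       # /etc/sysconfig/rawdevices -- bind shared block devices to raw devices
       /dev/raw/raw1 /dev/sdb1     # OCR         (>= 100 MB)
       /dev/raw/raw2 /dev/sdc1     # voting disk (>= 20 MB)

       # Restart the binding service and set ownership as root:
       # service rawdevices restart
       # chown root:oinstall /dev/raw/raw1 ; chmod 640 /dev/raw/raw1
       # chown oracle:oinstall /dev/raw/raw2 ; chmod 644 /dev/raw/raw2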
  21. Installation Flowchart: CRS
     Create two raw devices for the OCR and voting disk → Install the CRS/CSS stack with the Oracle Universal Installer → Start the Oracle stack for the first time with $CRS_HOME/root.sh → Load/install the hangcheck-timer module (Linux only)
  22. Oracle Cluster Manager (CRS) Installation
     • CRS MUST be installed and running before installing 10g RAC
     • CRS must be installed in a different location from the ORACLE_HOME (e.g., ORA_CRS_HOME)
     • Shared locations or devices for the voting file and OCR file must be available BEFORE installing CRS; reinstallation of CRS requires re-initialization of the devices, including their permissions
     • CRS and RAC require the private and public network interfaces to be configured before installing CRS or RAC
     • Specify the virtual interconnect for CRS communication
  23. CRS Installation (cont.)
     • Only one set of CRS daemons can run per RAC node
     • On Unix, the CRS stack is run from entries in /etc/inittab with 'respawn'
     • The supported way to start CRS is to boot the machine; the supported way to stop it is to shut down the machine or run "init.crs stop"
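     For reference, the inittab entries behind the stack typically look like the lines below; exact paths can vary by platform and release, so treat this as a sketch:

       h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
       h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
       h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null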
  24. Installation Flowchart: Oracle
     Install the Oracle software → Run root.sh on all nodes → Define the VIPs (VIPCA) → NETCA → DBCA → Verify the cluster and database configuration
  25. Oracle Installation
     • The Oracle 10g installation can be performed once CRS is installed and running on all nodes
     • Start the runInstaller (do not cd into your /mnt/cdrom directory)
     • Run root.sh on all nodes: running root.sh on the first node invokes VIPCA, which configures your virtual IPs on all nodes; after root.sh finishes on the first node, run it one node at a time on the remaining nodes
  26. VIP Installation
     • The VIP Configuration Assistant (VIPCA) starts automatically from $ORACLE_HOME/root.sh
     • After the welcome screen, choose only the public interface(s)
     • The next screen asks for the virtual IPs of the cluster nodes; add the name defined in /etc/hosts under "IP Alias Name"
       – The VIP must be a DNS-known IP address, because the VIP is used for the tnsnames connect
     • After finishing, you will see a new VIP interface, e.g. eth0:1; use ifconfig (on most platforms) to verify this
  27. VIP Installation (cont.)
     • If a cluster is moved to a new datacenter (or subnet), the IPs must change. The VIP is stored in the OCR, so any modification or change to the IP requires additional administrative steps; see Metalink Note 276434.1 for details
  28. NETCA Best Practices?
     • Configure listeners to listen on the VIP, not on the hostname (see the sketch below)
     • Server-side load balancing configuration recommendations?
     • FAN/FCF configuration recommendations?
     • Client-side load balancing?
     • SQL*Net parameters? Receive/send timeouts?
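     A minimal sketch of a VIP-based listener and a client alias with client-side load balancing; all host, listener, and service names are illustrative, and the original slide leaves the tuning questions open:

       # listener.ora on node1 -- listen on the VIP, not the hostname
       LISTENER_NODE1 =
         (DESCRIPTION_LIST =
           (DESCRIPTION =
             (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))))

       # tnsnames.ora on the client -- balance and fail over across the VIPs
       MYDB =
         (DESCRIPTION =
           (ADDRESS_LIST =
             (LOAD_BALANCE = on)
             (FAILOVER = on)
             (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
             (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521)))
           (CONNECT_DATA = (SERVICE_NAME = mydb)))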
  29. Create the RAC Database Using DBCA
     • Set MAXINSTANCES, MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and MAXDATAFILES (automatic with DBCA)
     • Create tablespaces as locally managed (automatic with DBCA)
     • Create all tablespaces with ASSM (automatic with DBCA)
     • Configure automatic UNDO management (automatic with DBCA)
     • Use an SPFILE instead of multiple init.ora files (automatic with DBCA)
  30. ASM Disk(group) Best Practices
     • The ASM configuration is performed initially as part of DBCA
     • Generally create two diskgroups: a database area and a flash recovery area; size depends on what is stored and on the retention period
     • Physically separate the database and flashback areas, making sure the two areas do not share the same physical spindles
     • Use diskgroups with a large number of similarly sized disks
     • When performing mount operations on diskgroups, mount all required diskgroups at once
     • Make sure disks span several backend disk adapters
     • If mirroring is done in the storage array, set REDUNDANCY=EXTERNAL
     • Where possible, use the pseudo devices (multi-path I/O) as the diskstring for ASM
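     Putting those points together, a sketch of the diskgroup setup in the ASM instance; the diskstring, device paths, and group names are assumptions for illustration:

       SQL> alter system set asm_diskstring = '/dev/rdsk/emcpower*';
       SQL> create diskgroup data external redundancy
              disk '/dev/rdsk/emcpower1', '/dev/rdsk/emcpower2';
       SQL> create diskgroup flash external redundancy
              disk '/dev/rdsk/emcpower3', '/dev/rdsk/emcpower4';
       SQL> -- mount all required diskgroups at once
       SQL> alter diskgroup all mount;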
  31. ASM File Best Practices
     • Use OMF with ASM: set db_create_file_dest=+group1, then simply "create tablespace books;"
     • The resulting ASM file can be seen from the ASM views:

       SQL> select a.name, f.bytes
              from v$asm_alias a, v$asm_file f
             where f.file_number = a.file_number;

       NAME           BYTES
       Books.256.1    104857600
  32. ASM File Best Practices
     • Use user templates when necessary; user or system templates can be specified in ASM file names at creation time
     • In the ASM instance:
       SQL> alter diskgroup group1 add template fine attributes (fine unprot);
     • In the DB instance:
       SQL> create tablespace tb1 datafile '+group1/tb1(fine)' size 100M;
  33. Validate the Cluster Configuration
     • Query the OCR to confirm the status of all defined services: crsstat -t
     • Use the script from Note 259301.1 to improve output formatting/readability:

       HA Resource                                 Target   State
       ora.BCRK.BCRK1.inst                         ONLINE   ONLINE on sunblade-25
       ora.BCRK.BCRK2.inst                         ONLINE   ONLINE on sunblade-26
       ora.BCRK.db                                 ONLINE   ONLINE on sunblade-25
       ora.sunblade-25.ASM1.asm                    ONLINE   ONLINE on sunblade-25
       ora.sunblade-25.LISTENER_SUNBLADE-25.lsnr   ONLINE   ONLINE on sunblade-25
       ora.sunblade-25.gsd                         ONLINE   ONLINE on sunblade-25
       ora.sunblade-25.ons                         ONLINE   ONLINE on sunblade-25
       ora.sunblade-25.vip                         ONLINE   ONLINE on sunblade-25
       ora.sunblade-26.ASM2.asm                    ONLINE   ONLINE on sunblade-26
       ora.sunblade-26.LISTENER_SUNBLADE-26.lsnr   ONLINE   ONLINE on sunblade-26
       ora.sunblade-26.gsd                         ONLINE   ONLINE on sunblade-26
       ora.sunblade-26.ons                         ONLINE   ONLINE on sunblade-26
       ora.sunblade-26.vip                         ONLINE   ONLINE on sunblade-26
  34. Validate the RAC Configuration
     • Instances are running on all nodes:
       SQL> select * from gv$instance;
     • RAC is communicating over the private interconnect:
       SQL> oradebug setmypid
       SQL> oradebug ipc
       SQL> oradebug tracefile_name
       /home/oracle/admin/RAC92_1/udump/rac92_1_ora_1343841.trc
     • Check the trace file in the user_dump_dest:
       SSKGXPT 0x2ab25bc flags
       info for network 0
         socket no 10   IP 10.0.0.1   UDP 49197   sflags SSKGXPT_UP
       info for network 1
         socket no 0    IP 0.0.0.0    UDP 0       sflags SSKGXPT_DOWN
  35. Validate the RAC Configuration
     • RAC is using the desired IPC protocol; check the alert log:
       cluster interconnect IPC version: Oracle UDP/IP
       IPC Vendor 1 proto 2 Version 1.0
       PMON started with pid=2
     • Use cluster_interconnects only if necessary; RAC will use the same "virtual" interconnect selected during the CRS install
     • To check which interconnect is used and where the setting came from, use "select * from x$ksxpia;":

       ADDR              INDX  INST_ID  P  PICK  NAME_KSXPIA  IP_KSXPIA
       00000003936B8580     0        1      OCR  eth1         10.0.0.1

       PICK values: OCR = Oracle Clusterware; OSD = operating system dependent; CI = the init.ora parameter cluster_interconnects is specified
  36. Post Installation
     • Enable asynchronous I/O if available:
       cd $ORACLE_HOME/rdbms/lib; make -f ins_rdbms.mk async_on ioracle
     • Adjust the UDP send/receive buffer size to 256K (Linux only); a sketch follows
     • If a buffer cache > 1.7 GB is required, use a 64-bit platform
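     A minimal sketch of the UDP buffer adjustment on Linux via sysctl:

       # /etc/sysctl.conf -- raise default and maximum socket buffer sizes to 256K
       net.core.rmem_default = 262144
       net.core.rmem_max = 262144
       net.core.wmem_default = 262144
       net.core.wmem_max = 262144

       # apply without a reboot:
       # sysctl -p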
  37. Optimize Instance Recovery
     • Set fast_start_mttr_target: a value between 60 and 300 is a good starting point; it is a balance of performance vs. availability
     • Size the buffer cache for single-pass recovery
     • Ensure asynchronous I/O is used
     • Follow the configuration best practices documented in Oracle® High Availability Architecture and Best Practices 10g Release 1 (10.1)
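     As a sketch (the 120-second target is an illustrative value within the suggested range):

       SQL> alter system set fast_start_mttr_target = 120 scope=both sid='*';
       SQL> -- compare the current estimate against the target
       SQL> select recovery_estimated_ios, estimated_mttr, target_mttr
              from v$instance_recovery;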
  38. SRVCTL
     • SRVCTL is a very powerful tool; it uses information from the OCR file
     • GSD in 10g runs only for compatibility, to serve 9i clients when 9i and 10g run on the same cluster
     • srvctl status nodeapps -n <node> shows all services running on a node
     • SRVCTL commands are documented in Appendix B of the RAC Admin Guide at: /rac.101/b10765/toc.htm
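     A few illustrative invocations (the database name mydb, instance mydb1, and node node1 are assumptions):

       $ srvctl status nodeapps -n node1          # VIP, GSD, ONS, listener on a node
       $ srvctl status database -d mydb           # all instances of the database
       $ srvctl start instance -d mydb -i mydb1   # start one instance
       $ srvctl stop database -d mydb             # stop the whole database
       $ srvctl config database -d mydb           # show the OCR configuration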
  39. Application Considerations: FCF vs. TAF
     • Connection retries: FCF allows retry at the application level, while TAF retries occur at the OCI/Net layer; the application layer (for example, an EJB container) fully controls retries
     • Integrated with the connection cache: FCF works in conjunction with the Implicit Connection Cache and has complete control over connections managed by the cache
     • RAC event based: FCF is a RAC event-based mechanism, which is more efficient than detecting failures of network calls
     • Load balancing support: FCF supports UP-event load balancing of connections across active RAC instances; work requests are distributed across RAC
  40. Applications Waste Time
     [Diagram: a session timeline of connect, SQL issue, blocked in read/write, and processing the last result; during the active and wait phases, failure detection via TCP timeouts (tcp_ip_cinterval, tcp_ip_interval, tcp_ip_keepalive) is slow, whereas the VIP and FAN deliver out-of-band failure events immediately]
  41. What is FAN?
     • Fast Application Notification (FAN) is the RAC HA notification mechanism that lets applications know about service and node events (UP or DOWN)
     • Fast Connection Failover (FCF) is the mechanism of 10g JDBC that uses FAN
     • Enable it and forget it: it works transparently by receiving asynchronous events from the RAC database
  42. How Does Fast Connection Failover Use FAN?
     • FCF is a subscriber of FAN:
       – On an instance UP event, it leverages FAN to load balance connections across the existing and new instances
       – On a node/instance DOWN event, it cleans up the connection cache (removes invalid connections)
     • iAS 10.1.3 will integrate 10g JDBC
     • Query/operation retries are up to the application/container, not FCF
  43. What is a Service?
     • In Oracle 10g, services are built into the database
     • A service divides work into logical workloads that share common functions, service-level thresholds, priority, and resource needs
     • Examples: OLTP & batch; ERP, CRM, HR, email; DW & OLTP; affinity groups 1-10
  44. Take Advantage of 10g Services
     • Easy to set up and configure; then connect by service (see the sketch below)
     • Benefits
       – Availability: services have a defined topology and automatic recovery; callouts fire as services come up and down
       – Performance: a new level for performance tuning; workloads are routed transparently; alerts and actions when performance goals are violated; natural support for mixed workloads and mixed-size nodes
       – Manageability: each workload is managed in isolation; prioritization and resource management
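     A sketch of defining and starting a service with SRVCTL; the database, service, and instance names are illustrative:

       $ srvctl add service -d mydb -s oltp -r mydb1,mydb2 -a mydb3
       $ srvctl start service -d mydb -s oltp
       $ srvctl status service -d mydb -s oltp

     Clients then connect with SERVICE_NAME=oltp rather than a SID.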
  45. Services in Enterprise Manager
     [Screenshot: services shown in Enterprise Manager, a critical tool for performance tuning; more details on services are provided in a separate web seminar]
  46. Application Considerations: Configuration
     • Plan your services: application to service, data range to service; global name, HA configuration, priority, response time
     • Use the service: not the SID, not the instance, not the host
       – Use the service to connect
       – Use the virtual IP for database access
       – Use the cluster alias to eliminate address lists
     • Use services for jobs and parallel query (PQ)
  47. Application Considerations: Runtime
     • Make applications measurable: instrument with MODULE and ACTION; use DBMS_MONITOR to gather statistics (a sketch follows)
     • For priorities, use the Resource Manager
     • For load balancing, use CLB to balance connections by service; use service metrics to "deal requests" from mid-tier connection pools by service
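     A minimal instrumentation sketch; the service, module, and action names are illustrative:

       SQL> -- in the application code path, tag the work being done
       SQL> begin
              dbms_application_info.set_module(module_name => 'ORDERS',
                                               action_name => 'INSERT');
            end;
            /
       SQL> -- enable statistics aggregation for that service/module/action
       SQL> begin
              dbms_monitor.serv_mod_act_stat_enable(service_name => 'OLTP',
                                                    module_name  => 'ORDERS',
                                                    action_name  => 'INSERT');
            end;
            /
       SQL> -- review the aggregated statistics
       SQL> select aggregation_type, stat_name, value
              from v$serv_mod_act_stats
             where service_name = 'OLTP';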
  48. Application Considerations: Recovery
     • Use JDBC connection pools for fast failover: surviving sessions continue FAST; interrupted sessions detect the error FAST
     • Use TAF callbacks to trap and handle errors
     • Use HA callouts/events (up, down, not restarting) to notify the application to take appropriate action: save and recall non-transactional state; check the transaction outcome and resubmit
  49. Application Deployment
     • Same guidelines as single instance: SQL tuning; sequence caching; partition large objects; use different block sizes; tune instance recovery; avoid DDL; use LMTs and ASSM
  50. Agenda
     • Planning Best Practices: architecture; expectation setting; objectives and success criteria; project plan
     • Implementation Best Practices: infrastructure considerations; installation/configuration; database creation; application considerations
     • Operational Best Practices: backup & recovery; performance monitoring and tuning; production migration
  51. Operations
     • Same DBA procedures as single instance, with some minor, mostly mechanical differences
     • Managing the Oracle environment: starting/stopping the Oracle cluster stack with server boot/reboot; managing multiple redo log threads
     • Startup and shutdown of the database: use Grid Control
     • Backup and recovery; performance monitoring and tuning; production migration
  52. Operations: Backup & Recovery
     • RMAN is the most efficient option for backup & recovery
       – Manage the snapshot control file location
       – Manage the control file autobackup feature
       – Manage archived logs in RAC: choose a proper archiving scheme
       – Node affinity awareness
     • RMAN and Oracle Net restrictions apply in RAC: you cannot specify a net service name that uses Oracle Net features to distribute RMAN connections across more than one instance
     • Oracle Enterprise Manager provides a GUI interface to Recovery Manager
  53. Backup & Recovery
     • Use RMAN: the only option to back up and restore ASM files (a sketch follows)
     • Use Grid Control: a GUI interface to RMAN
     • Use the 10g flash recovery area for backups and archive logs: on ASM and available to all instances
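     A minimal RMAN sketch covering the points above; the snapshot control file path is illustrative:

       RMAN> configure controlfile autobackup on;
       RMAN> configure snapshot controlfile name to
             '/u01/app/oracle/snapcf_mydb.f';
       RMAN> backup database plus archivelog delete input;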
  54. Performance Monitoring and Tuning
     • Tune first for single-instance 10g
     • Use ADDM and AWR; Oracle Performance Manager; RAC-specific views
     • Supplement with scripts/tracing: monitor V$SESSION_WAIT to see which blocks are involved in wait events; trace events such as 10046 (e.g., level 8) can provide additional wait event detail; monitor alert logs and trace files, as on single instance
     • Supplement with system-level monitoring: CPU utilization never 100%; I/O service times never above acceptable thresholds; CPU run queues at optimal levels
     • Note that in 10g, performance statistics are message/time based, as opposed to event based in 9i
  55. Performance Monitoring and Tuning
     • An obvious application deficiency on a single node cannot be solved by multiple nodes: single points of contention; not scalable on SMP; I/O bound on a single-instance DB
     • Tune on a single-instance DB to ensure the application is scalable first: identify/tune contention using v$segment_statistics to identify the objects involved; concentrate on the top wait events if the majority of time is spent waiting; concentrate on bad SQL if CPU bound (see the queries below)
     • Maintain a balanced load on the underlying systems (DB, OS, storage subsystem, etc.): excessive load on individual components can trigger aberrant behaviour
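     Two illustrative queries for the contention checks above:

       SQL> -- top cluster-related waits across all instances
       SQL> select inst_id, event, total_waits, time_waited
              from gv$system_event
             where wait_class = 'Cluster'
             order by time_waited desc;

       SQL> -- objects with the most global cache activity
       SQL> select owner, object_name, statistic_name, value
              from v$segment_statistics
             where statistic_name like 'gc%'
             order by value desc;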
  56. Performance Monitoring / Tuning
     • Deciding whether RAC is the performance bottleneck:
       – The "Cluster" wait event class
       – The amount of cross-instance traffic: type of requests; type of blocks
       – Latency: block receive time; buffer size factor; bandwidth factor
  57. Avoid False Node Evictions
     • "Heartbeat" failures can occur if critical processes are unable to respond quickly
       – Enable real-time priority for LMS
       – Do not run the system at 100% CPU over long periods
       – Ensure good I/O response times for the control file and voting disk
  58. Production Migration
     • Adhere to strong systems life cycle disciplines:
       – Comprehensive test plans (functional and stress)
       – A rehearsed production migration plan
       – Change control: separate environments for dev, test, QA/UAT, and production; system AND application change control; log changes to the spfile
       – Backup and recovery procedures
       – Patchset maintenance
       – Security controls
       – Support procedures
  59. Questions & Answers
  60. New World: Disk-Based Data Recovery
     • Disk economics are close to tape: from roughly 200 MB per disk in the 1980s to 200 GB in the 2000s, a 1000x increase
     • Disk is better than tape: random access to any data
     • The recovery strategy was rearchitected to take advantage of these economics: random access allows backing up and recovering just the changes to the database
     • Backup and recovery goes from hours to minutes
  61. Flash Recovery Area
     • A unified storage location for all recovery files and recovery-related activities in an Oracle database
       – A centralized location for control files, online redo logs, archive logs, flashback logs, and backups
       – A flash recovery area can be defined as a directory, file system, or ASM disk group
       – A single recovery area can be shared by more than one database
     • Minimizes the number of initialization parameters to set when you create a database: define a database area and a flash recovery area location, and Oracle creates and manages all files using OMF (a sketch follows)
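     A sketch of the two locations as initialization parameters; the diskgroup names and size are illustrative (the size limit must be set before the destination):

       SQL> alter system set db_create_file_dest = '+DATA' scope=both sid='*';
       SQL> alter system set db_recovery_file_dest_size = 100g scope=both sid='*';
       SQL> alter system set db_recovery_file_dest = '+FLASH' scope=both sid='*';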
  62. Flash Recovery Area Space Management
     [Diagram: when the disk limit is reached and a new file needs to be written into the flash recovery area, space pressure occurs and a warning is issued to the user; (1) RMAN updates the list of files that may be deleted, and (2) Oracle deletes backup files and archive logs that are no longer required on disk]
  63. Benefits of Using a Flash Recovery Area
     • Unifies the storage location of related recovery files
     • Manages the disk space allocated for recovery files automatically
     • Simplifies database administrator tasks
     • Much faster backup, much faster restore
     • Much more reliable due to the inherent reliability of disks