Tuesday 18 December, 2007

Unicode System - Effects on SAP Server

Unicode influences the following factors:
=================================================
- Load of applications on hardware (CPU and RAM)
  - See http://service.sap.com/unicode
  - See http://service.sap.com/performance
  - Result: about +30% to +35% CPU and +50% RAM
- Size of the database
  - See http://service.sap.com/unicode
  - The result depends on the use of UTF-8 (+36%) or UTF-16 (+60% to +70%).
- Performance of the database
  - Ask the database vendor for the performance results of databases using Unicode.

Unicode systems use between one and four bytes to store a single character, depending on the character and the Unicode encoding used; a plain ASCII letter needs one byte in UTF-8, for example, while typical Asian characters need three. As a simple estimate, then, the resources of the system could be doubled to be prepared for the worst case.

In reality, the load on the CPU caused by the applications, in this case, the SAP software, increased by about 30-35%. The load on the RAM, again caused by application software and not database software, increased by about 50%.

The size of the database of Unicode systems, compared to non-Unicode systems, depends largely on the type of Unicode representation chosen. UTF-16 (+60% to 70%) requires more space than UTF-8 (+36%).
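As a rough worked example (the figures are illustrative, not a sizing recommendation): a non-Unicode system running on 8 CPUs with 16 GB of RAM and a 200 GB database would, by these rules of thumb, be resized to roughly 11 CPUs (+35%), 24 GB of RAM (+50%), and 272 GB (UTF-8, +36%) to 340 GB (UTF-16, +70%) of database space.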

The performance of the database system is also affected. For performance figures for the database systems under Unicode, contact your database manufacturer.

To send POP UP to All in SAP - INSTANT POPUP

SEND POP UP TO ALL - INSTANT POPUP in SAP
==========================================

You can use function module TH_POPUP directly from transaction SE37,
or if you search the web (Google) for ZPOPUP or similar, you will find ABAP example programs that use this FM....
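A minimal ABAP sketch of such a wrapper report (the report name, selection screen, and default text are illustrative; CLIENT, USER, and MESSAGE are the commonly used TH_POPUP parameters):

REPORT zpopup.

* Target user and the text to display
PARAMETERS: p_user LIKE sy-uname OBLIGATORY,
            p_msg(128) TYPE c LOWER CASE DEFAULT 'System restart in 10 minutes'.

* TH_POPUP pushes an instant popup to the user's GUI sessions in this client
CALL FUNCTION 'TH_POPUP'
  EXPORTING
    client  = sy-mandt
    user    = p_user
    message = p_msg.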

How does an SAP Work Process access memory?

An SAP process accesses memory in the following way:
-------------------------------------------------------

• In dialog processes (Process type DIA)
1. The first section of the roll area up to parameter ztta/roll_first - then -
2. Extended memory up to parameter ztta/roll_extension -or- extended memory (size: em/initial_size_MB) is exhausted - then -
3. The rest of the roll area (size: ztta/roll_area) - then -
4. Process local memory up to parameter abap/heap_area_dia -or- operating system swap space is exhausted -or- the process's data segment size is exceeded -or- the process's address space is exceeded -or- the operating system's address space is exceeded.

NOTE: Under Windows NT the allocation sequence of non-dialog work processes is the same as with a dialog work process.

• In non-dialog Processes (Process type VB, UP, BTC)
1. The total roll area (size: ztta/roll_area) - then -
2. Process local memory up to parameter abap/heap_area_nondia -or- operating system swap space is exhausted -or- the process's data segment size is exceeded -or- the process's address space is exceeded -or- the operating system's address space is exceeded - then -
3. Extended memory up to parameter ztta/roll_extension -or- extended memory (size: em/initial_size_MB) is exhausted.
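For orientation, all of these limits are set in the instance profile. A hypothetical excerpt (values are purely illustrative, not tuning recommendations; sizes are in bytes except em/initial_size_MB):

ztta/roll_first = 1
ztta/roll_area = 6500000
ztta/roll_extension = 2000000000
em/initial_size_MB = 4096
abap/heap_area_dia = 2000000000
abap/heap_area_nondia = 0
abap/heap_area_total = 2000000000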

Monday 16 July, 2007

Check User Activities for days-weeks by tcode-report....

Check User Activities for days-weeks by tcode-report
=======================================================================

You can use transaction code ST03N.

**Check out TCodes STAT or STATTRACE**

1. Go to tx code - ST03N
2. Under "Workload" you can select your "Instance or Total" so that it expands and shows you Days, Week, Month.
3. If you want to look at the transactions executed for a particular day, double click on that day.
4. At the bottom left you get "Analysis Views"
5. Select & expand "User and Settlement Statistics"
6. Double click on "User Profile"
7. On the right side of the window you get a list of all the users
8. Double click on a particular user you want to view the details of.
9. The new screen will display the "Transactions/Reports Utilized by User XXX"

If you want to track which users executed a particular transaction, then follow this:

10. In "Analysis Views" expand "Transaction Profile"
11. Double click on "Standard"
You can view the list of Transactions and Reports.
12. Double click on the Tx Code or Report you want to check and it will show its usage.

LICENSING THE J2EE ENGINE.......

Licensing the J2EE Engine
======================================================
Once the J2EE Engine has been installed, you can log on, since a temporary license has automatically been installed.

You then have to request and install a permanent license from SAP.



If you have installed the SAP Web Application Server with ABAP and J2EE, you can follow the licensing procedure used in earlier releases of the SAP Web AS. The documentation can be found in SAP License.

There are two types of SAP licenses: permanent and temporary.

• Permanent License

How to get a permanent license from SAP is explained in Requesting and Installing an SAP License.

• Temporary License

If your permanent license has expired, as a short-term measure you can install a temporary license.

In the Visual Administrator choose Server 0 -> Services -> Licensing Adapter. Then choose tab page Runtime -> Installed Licenses and Install subsequent temporary license.

This is valid for 28 days. By then you should have installed a permanent license again.



Note that you cannot install another temporary license if the expired license is also a temporary license.



A newly installed license does not take effect until the J2EE Engine has been restarted. It is irrelevant whether the license is a permanent or a temporary license.

=======================================================


Requesting and Installing the SAP License
===========================================

You need a valid SAP license to log on to the SAP Web AS. When the SAP Web AS is installed, a temporary license is also installed, which you must replace with a permanent license.


Note that with a J2EE+ABAP installation (SAP Web Application Server with ABAP and J2EE), you have to import the ABAP license (see SAP License). You can then ignore this section.

Prerequisites
You have installed the SAP J2EE Engine and started the Visual Administrator.

Procedure

1. In the Visual Administrator choose Server 0 -> Services -> Licensing Adapter. The system data that you need to request the license from the SAP Service Marketplace appears:

- Installation number (if it exists)
- System ID
- System number (if it exists)
- Hardware key
- Current release

2. Under the Internet address service.sap.com -> mySAP Business Suite, you can get to the initial page of the license key requests in the SAP Service Marketplace. Here you will find all the information you need to request license keys.

3. Enter your e-mail address in the request. The license key will be sent to you promptly by e-mail. Alternatively, you can also download the license key from the SAP Service Marketplace.

Do not make any changes to the license key. To import the license key, the file must not have been changed.

4. In the Licensing Adapter in the Visual Administrator choose Install License from File.

5. Select the license file that you received from SAP.

Result
The license has been installed.

You can view all the licenses installed in your SAP System by choosing, in the Visual Administrator, Server -> Services -> Licensing Adapter, then tab Runtime -> Installed Licenses.

Additional Information
For more information about requesting license keys, see SAP note 94998.

TELNET-NFS-DNS-NFSCORE Settings Enabling...

TELNET-NFS-DNS-NFSCORE Settings Enabling
---------------------------------------------------------

To enable Telnet:
1. Uncomment the line below in /etc/inetd.conf:
telnet stream tcp nowait root /usr/lbin/telnetd telnetd
2. Make inetd re-read its configuration:
#inetd -c

To enable the NFS server:
1. vi /etc/rc.config.d/nfsconf and set:

NFS_SERVER=1

2. Edit /etc/exports:
/tmp/dir_temp -root=hosta
/home -a
...and so on....
3. Start the daemons:
#/sbin/init.d/nfs.core start
#/sbin/init.d/nfs.server start
4. Export the file systems and check the result:
#exportfs -a
#exportfs

Enable NFS client:
1. Set at boot time:
vi /etc/rc.config.d/nfsconf
NFS_CLIENT=1
NFS_SERVER=0
Then start the two services:
#/sbin/init.d/nfs.client start
#/sbin/init.d/nfs.core start
NOTE: The two services above are started at run level 2 by default!
2. vi /etc/fstab: add the mount point and server
myhp:/home /home03 nfs rw 0 0
myhp:/usr/share/man /usr/share/man nfs ro 0 0
3. Create the mount points:
#mkdir -p /home03 /usr/share/man
4. Mount manually if desired:
#mount myhp:/home /home03


Enabling DNS requires more time than the others.

DNS SERVER CONFIGURATION

1. Create the /etc/named.data directory
2. Create the /etc/named.data/param file:
#vi param
-d boota.com
-n 192.168.3
-z 192.168.3.10
-b /etc/named.boot
-s
3. Create the database:
#hosts_to_named -f param
4. Download db.cache from the Internet and put it into /etc/named.data
5. Modify:
#vi /etc/rc.config.d/namesvrs
NAMED=1
6. Start the named service:
#/sbin/init.d/named start

CLIENT CONFIGURATION
1. vi /etc/nsswitch.conf
hosts: dns files
2. vi /etc/resolv.conf
search your.domain.com
nameserver 10.0.76.1 (IP of primary DNS server)
nameserver 10.0.76.2 (IP of secondary DNS server)
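A quick sanity check from a client once both services are up (host names are the examples used above): nslookup exercises the new resolver/DNS setup, and showmount lists the file systems exported by the NFS server.

#nslookup myhp
#showmount -e myhp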

Friday 29 June, 2007

Critical Tasks/Daily Tasks/Daily Activities to be performed on SAP system

Critical Tasks/Daily Tasks/Daily Activities to be performed

There are a few critical tasks that should be completed every morning. These tasks answer the following questions:
• Is the R/3 System running?
• Did the backups execute and complete successfully?
If the answer to either question is “no,” then the situation must be resolved quickly because:
• If the R/3 System is down, no work can be done.
• If the backups failed, and a disaster occurs, you could lose all the data since your most recent good backup.

Verify that SAP is running

Your first task of the day is to perform a high-level check to see if the R/3 System is running.

Why?

If the system is not running, your users will be calling to find out what happened and when the system will be up again.
As a basic level check, if you can connect to the R/3 System, the following questions are answered:
• Is the R/3 System working?
• Is the network between you and the R/3 System working?

How?

From a workstation, log on with the SAP GUI. If you can log on, the test is successful.

Verify that the Backups Ran Successfully

What

You need to verify that the backups that were supposed to run last night ran successfully.
Backups of the SAP database and related non-database operating system level files are essential to recover the SAP System.
Types of non-database files include:
• Database log dumps
• Data files for third-party applications that do not store their data in the system
Examples of such files are external tax files.
• Transport files
• Inbound and outbound interface files
• Externally stored print files

Why

If there is a problem with any of the backups, the problem needs to be quickly resolved. If a database failure occurs that requires a restore, and the last backup failed, you will have to recover using the last successful backup. If you do not have a good (usable) backup, you will have to go to an older backup. This process requires applying more logs the further back you go and increases the time required to restore the database and bring it current.
Once the problem has been fixed, if it does not significantly impact performance, execute an online backup. Even if it impacts performance, your company may make it policy to run the online backup. This step gives you a more recent backup.

At the operating system level, some of these files may need to be in sync with the R/3 database. Restoring the R/3 System without these files results in an incomplete (unusable) restore (for example, external tax files that need to be in sync with the system data or the tax systems reports will not match the R/3 reports).

When

These critical tasks need to be done first thing in the morning. If there is a “graveyard” operations shift, the backup check should be done once the backup job is complete. The “graveyard” shift is the third shift of the day and is typically from 10:00 p.m. to 7:00 a.m.

Caution: Any failed backup must be immediately investigated and resolved. Do not maintain a “we will just run the backup again tonight and see if it works” attitude. If that backup fails, you have another day without a backup.

Users (Transaction AL08)

What

This transaction displays all the users who are currently logged on to the system. It shows both the user’s ID and terminal name.

Why

In a smaller company, the administrator can recognize user IDs logged on to unfamiliar terminals. This may indicate that someone—other than the designated user—is using that user ID. A user logged on to more than one terminal may indicate that the user ID is being used or shared by more than one person.

OS Monitor (Transaction OS06)

What

The system logs are where the operating system and some applications write event records.
Depending on the operating system, there may be multiple logs.

Why

There may be indications of a developing problem (for example, a hard drive generating errors or a failing drive that needs to be replaced).

Select Background Jobs/Graphical Job Monitor (Transaction SM37/RZ01)

What

Background jobs are batch jobs scheduled to run at specific times during the day.

Why

If you are running critical jobs, you need to know if the job failed, because there may be other processes, activities, or tasks that are dependent on these jobs.

CCMS Alert Monitor (Transaction RZ20)

What

Transaction RZ20 is a centralized alert monitor and is new with Release 4.0. With this transaction, you can monitor the servers in your landscape, such as development, QA, testing, production, etc. You no longer have to individually log into each system to search for alerts. If there is an alert, the monitor will link to many of the other transactions.

Why

An alert indicates a potentially serious problem that should be quickly resolved. If not contained, these problems could degenerate into a disaster.

Users (Transaction SM04)

What

This transaction displays all the users who are currently logged on to the system. It shows both the user’s ID and terminal name.

Why

In a smaller company, the administrator can recognize user IDs logged on to “unfamiliar” terminals, indicating that someone—other than the designated user—is using that user ID.

A user logged on to more than one terminal indicates that the user ID is being:
• Used by someone else
• Used or shared by several people

Lock Entry List (Transaction SM12)

What

A lock is a mechanism that prevents other users from changing the record on which you are working. An example that illustrates the importance of using this function follows.

EXAMPLE:
You are changing a customer mailing address. Someone else is changing the customer’s telephone number at the same time. You save your change first; then the other person saves their change. The other person’s change overwrites your change, and your change will be lost.

Why

There may be old locks still in place from transactions that did not release, or from when the user was cut off from the network. Unless cleared, these locks prevent access or change to the record until the system is cycled. The easiest way to locate them is to look for locks from prior days.

Caution: We presume that the profile parameter rdisp/gui_auto_logout has been set. This parameter defines an automatic logout of the user if there is no activity for the set number of minutes.

Update Records (Transaction SM13)

What

A failed update, or an “update terminate,” is a database update that failed. These failed updates occur when a user entry or transaction is not entered or updated in the database.

The following analogy should help clarify this concept:
1. A secretary gives a file-clerk a folder (similar to a save).
2. The file-clerk gives the secretary a receipt (similar to the R/3 document number).
3. On the way to the file cabinet, the clerk falls, and gets hurt.
The folder is not put into the cabinet (this is the failed update).
4. The end result is the folder is not in the cabinet—even though the secretary has the receipt.

For performance reasons, the database update is done in asynchronous mode. In this mode, the user continues to work while the system takes over the update process and waits for the database update to complete. In synchronous mode, users would have to wait until the database successfully updated before they could continue to work.

Why
The users probably received a document number, so they assume that the entry is in the system; however, if a failed update occurred, the entry is not in the system. In a customer order, unless the order is reentered, the customers would not get their order and no trace of it would be found in the system!

System Log (Transaction SM21)

What

The system log is the R/3 System’s log of events, errors, problems, and other system messages.

Why

The log is important because unexpected or unknown warnings and errors could indicate a serious problem.

Batch Input (Transaction SM35)

What

This transaction shows jobs that need to be processed or started, and jobs with errors that need to be resolved.

Why

This transaction is important because it alerts you to batch input jobs that are:
• New
These are jobs that are waiting to be processed (for example, a posting from an interface file). If not processed, the data will not post to the system.
• Incorrect
These are jobs that have failed due to an error. The danger is that only a portion of the job may have posted to the system. This increases the potential for data corruption of a different sort, as only part of the data is in the system.

Work Processes (Transaction SM50 and SM51)

What

These transactions allow users to view the status of work processes and monitor for problems. Transaction SM51 is a central transaction from which you can select the instance to monitor. SM51 starts transaction SM50 for each application server. Transaction SM50 is used for systems without application servers.

Why

Transaction SM51 is one place to look for jobs or programs that may be “hung” (indicated by long run times). If batch jobs are not running, or if all the batch work processes are in use, transaction SM50 may provide a hint of the problem.

Spool (Transaction SP01)

What

The spool is the R/3 System’s output manager. Data sent to the printer is sent to the R/3 spool and then sent to the operating system to print.

Why

There may be problems with the printer at the operating system level. These problems need to be resolved immediately for time-critical print jobs (for example, checks, invoices, shipping documents, etc.) or there may be an operational impact. Active spool jobs that have been running for over an hour could indicate a problem with the operating system spool or the printer.

Tune Summary (Transaction ST02)

What

The buffer tune summary transaction displays the R/3 buffer performance statistics. It is used to tune buffer parameters of R/3 and, to a lesser degree, the R/3 database and operating system.

Why

The buffer is important because significant buffer swapping reduces performance. Look under Swaps for red entries. Regularly check these entries to establish trends and get a feel of the buffer behavior.

Workload Analysis (Transaction ST03)

What

Workload analysis is used to determine system performance.

How
Check statistics and record trends to get a feel for the system’s behavior and performance.
Understanding the system when it is running well helps you determine what changes may need to be made when it is not.

Database Performance Analysis (Transaction ST04)

What

A high-level database performance monitor.

Why

This transaction provides the ability to:
• Monitor the database in relation to:
  - Growth
  - Capacity
  - I/O statistics
  - Alerts
• Drill down for additional information.
• Monitor the database without logging on to it.

ABAP Dump Analysis (Transaction ST22)

What

An ABAP dump (also known as a short dump) is generated when a report or transaction terminates as the result of a serious error. The system records the error in the system log (transaction SM21) and writes a snapshot (dump) of the program termination to a special table. This transaction can also be called from the system log (transaction SM21).

Why

You use an ABAP dump to analyze and determine why the error occurred, and take corrective action.

The Step By Step Solution - SAPGUI using HTML

The Step By Step Solution - SAPGUI using HTML


Activate the necessary ICF services

In transaction SICF, locate the services by path:

/sap/public/bc/its/mimes
/sap/bc/gui/sap/its/webgui

Nice to have:
/sap/public/ping

Activate the full path to these services with the context menu.
Make sure that compression is switched off for all services in these service trees.

Publish the IAC Services

In transaction SE80, choose from the menu Utilities -> Settings -> Internet Transaction Server (tab) -> Publish (tab) and set “On Selected Site” = INTERNAL.

This restricts the publication in the next step to the integrated (internal) ITS.

Locate the Internet services SYSTEM and WEBGUI.

Publish these services with the context menu -> Publish -> Complete Service.

Browse to
http://<hostname>:<port>/sap/bc/gui/sap/its/webgui/!
and log on to the webgui.

SAP System Administration: Cleaning the System

SAP System Administration: Cleaning the System

----------------------------------------------------------

There are a number of jobs that should be scheduled to run on a daily, weekly, or monthly basis. You can run the jobs manually first, and decide how often they should run for the particular system, and then schedule them to run automatically. These jobs delete obsolete files such as print logs, job logs, and ABAP dumps.

RSBTCDEL: Deletes old batch jobs (sugg. older than 30 days)
RSPO0041: Deletes old print jobs (sugg. older than 7 days)
RSBDCREO: Reorganizes batch sessions and logs (sugg. older than 7 days)
RSSNAPDL: Deletes short dumps (sugg. default values)
RSBPSTDE: Deletes old job statistics (sugg. older than 30 days)
RSCOLL00: Statistics collector for performance monitor (this actually is not a cleanup job, but it should be scheduled hourly).
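A common way to set these up (the retention values are the suggestions above): run each report once manually via SE38 with your chosen retention period, save the selections as a variant, and then schedule that variant as a periodic background job in SM36.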

Additionally, there are some cleanup jobs that should be executed manually on a regular basis (usually once a month). These include:
sqldba: Use the Show/Cleanup commands.
tp: Use the "tp check all" and "tp clearold all" commands. Make sure to back up the transport directory first.


TemSe (temporary sequential data) database check: transaction SP12. Execute the menu command TemSe database -> Consistency Check and delete inconsistent data.

Thursday 28 June, 2007

SAP Router Installation & Config...

SAP Router Installation and Configuration
----------------------------------------------------------------------------------

The first thing you need to do is send a customer message to SAP
Support (component XX-SER-NET-OSS-NEW) and ask them to register the
hostname and IP of your new SAProuter.

You have to register it with an official IP address (no internal IPs
allowed), but it is fine to use NAT in the firewall/router.

After you've received a confirmation from SAP that your SAProuter has
been registered, you are ready to configure your SAProuter.

If your SAProuter directory is C:\usr\sap\saprouter, these are the steps
to follow.

Note: You will be asked for a PIN code. Just pick your own 4 numbers, but
you'll have to use the same PIN every time you're asked to enter one.

1. Set 2 environment variables: SECUDIR and SNC_LIB according to the
guide you've downloaded.

2. Download the SAP Crypto Library and unpack it into
C:\usr\sap\saprouter

3. To generate a certificate request, run the command:
sapgenpse get_pse -v -r C:\usr\sap\saprouter\certreq -p C:\usr\sap\saprouter\local.pse "<your Distinguished Name>"
(Use the Distinguished Name exactly as shown for your SAProuter on the SAP certificate request page.)

4. Then you have to follow the guide and request the certificate from
http://service.sap.com/tcs -> Download Area -> SAProuter Certificate

5. Create a file C:\usr\sap\saprouter\srcert and copy the requested
certificate into this file. Then run the command:
sapgenpse import_own_cert -c C:\usr\sap\saprouter\srcert -p C:\usr\sap\saprouter\local.pse

6. To generate credentials for the user that's running the SAProuter
service, run command:
sapgenpse seclogin -p C:\usr\sap\saprouter\local.pse -O
(this will create the file "cred_v2")

7. Check the configuration by running command:
sapgenpse get_my_name -v -n Issuer
(This should always give the answer "CN=SAProuter CA, OU=SAProuter,
O=SAP, C=DE")

8. Create the SAProuter service on Windows with the command:
ntscmgr install SAProuter -b C:\usr\sap\saprouter\saprouter.exe -p "service -r -R C:\usr\sap\saprouter\saprouttab -W 60000 -K ^p:<your Distinguished Name>^"

9. Edit the Windows Registry key as follows:
MyComputer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SAProuter\ImagePath --> change both ^ to "

10. Start the SAProuter service

11. Enter the required parameters in OSS1 -> Technical Settings

---------------------------------------------------------------------------------

Installation on UNIX
----------------------------------------------

1. Create the subdirectory saprouter in the directory /usr/sap/.

2. Get the latest version of the SAProuter from the SAP Service Marketplace (service.sap.com/patches). Choose Support Packages and Patches -> Entry by Application Group -> Additional Components -> SAPROUTER. The SAProuter is in the packet saprouter*.SAR; the program niping is also in this packet. Copy the programs saprouter and niping to the newly created directory /usr/sap/saprouter.

If you cannot copy the programs from the SAP Service Marketplace, you can copy a version (which may be obsolete) from your directory /usr/sap/<SID>/SYS/exe/run.

3. (Optional) If you want to start the SAProuter on the same computer used for an SAP instance, insert the following lines into the file /usr/sap/<SID>/SYS/exe/run/startsap:

#
# Start saprouter
#
SRDIR=/usr/sap/saprouter

if [ -f $SRDIR/saprouter ] ; then
    echo "\nStarting saprouter Daemon " | tee -a $LOGFILE
    echo "----------------------------" | tee -a $LOGFILE
    $SRDIR/saprouter -r -R $SRDIR/saprouttab \
        | tee -a $LOGFILE &
fi



Insert the lines before the commands to start the SAP instance.

Normally the SAProuter runs on a different computer. If this is so, this step is omitted and you start the SAProuter as described in Starting the SAProuter.

4. Maintain the route permission table in the directory /usr/sap/saprouter. If you want to keep it in another directory or under a name other than saprouttab, you must specify this with the SAProuter option -R (see the additional options further down this page).
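For reference, a typical manual start on UNIX using the directory and table created above looks like this (the -r and -R options are described in the option overview further down this page):

#/usr/sap/saprouter/saprouter -r -R /usr/sap/saprouter/saprouttab &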

This should help in SAP Router configuration and installation.

SAP Router Help.......

SAP ROUTER Information................

Check out the links below and also the sample routtab file:

http://service.sap.com/saprouter

http://help.sap.com/saphelp_nw04/helpdata/en/4f/992ce8446d11d189700000e8322d00/frameset.htm

===========================================
http://help.sap.com/saphelp_erp2005vp/helpdata/en/26/95563cda79d734e10000000a11402f/frameset.htm
============================================

Customer SAP Server -- Customer SAProuter (SNC) --- Internet --- SAP SAProuter (SNC) -- SAP

SAP uses SNC (Secure Network Communication) to support its customers. You have to make your SAProuter SNC-ready before SAP can help you when you have a serious problem on your SAP server.

I suggest not installing the SAProuter on the same server as SAP. Install the SAProuter at your site on a different box. You can place the SAProuter (with SNC configured) in the DMZ of your internal network.

=======================================

SAP Network Interface Router
---------------------------------------
start router : saprouter -r
stop router : saprouter -s
soft shutdown: saprouter -p
router info : saprouter -l (-L)
new routtab : saprouter -n
toggle trace : saprouter -t
cancel route : saprouter -c id
dump buffers : saprouter -d
flush buffers : saprouter -f
start router with third-party library: saprouter -a library

additional options
--------------------------
-R routtab : name of route-permission-file (default ./saprouttab)
-G logfile : name of log file (default no logging)
-T tracefile : name of trace file (default dev_rout)
-V tracelev : trace level to run with (default 1)
-H hostname : of running saprouter (default localhost)
-S service : service-name / number (default 3299)
-P infopass : password for info requests
-C clients : maximum no of clients (default 800)
-E servers : maximum no of servers to start (default 1)
-K [myname] : activate SNC; if given, use 'myname' as own sec-id
-A initstring: initialization options for third-party library

expert options
------------------------------
-B quelength : max. no. of queued packets per client (default 1)
-Q queuesize : max. total size for all queues (default 20000000 bytes)
-W waittime : timeout for blocking net-calls (default 5000 millisec)
-M min.max : portrange for outgoing connects, like -M 1.1023
-U abs_path : absolute path for Unix Domain Sockets,
default is "/tmp/.sapstream%d"

# this is a sample routtab : -----------------------------------------
# deny routes from host1 to host2 serviceX
D host1 host2 serviceX
# deny all routes from host3
D host3
# permit routes from anywhere to any host using serviceX
P * * serviceX
# permit all routes from/to addresses matching 155.56
P 155.56.*.* 155.56
# permit ... with 3rd byte matching 1011xxxx
P 155.57.1011xxxx.*
# permit routes from host4 to host5 if password xxx supplied
P host4 host5 * xxx
# permit information requests from host6
P host6 localhost 3299
# permit native-protocol-routes to non-SAP-server telnet
P host7 host8 telnet
# permit ... excluding native-protocol-routes (SAP-servers only)
S host9
# permit ... if number of preceding/succeeding hops (saprouters) <= 0/*
P0,* host10
# permit SNC-connection with partnerid = 'sncname1' to any host
KP sncname1 * *
# permit all SAP-SAP SNC-connections to host11
KS * host11 *
# deny all SNC-connections with partnerid = 'sncname "abc'
KD "sncname "abc" * *
# open connects to host11 with SNC enabled and partnerid = 'sncname3'
KT sncname3 host11 *

# first match [host/sncname host service] is used
# permission is denied if no entry matches
# service wildcard (*) does not apply to native-protocol-routes
# --------------------------------------------------------------------

Wednesday 27 June, 2007

Transport is running & running.....(STMS)

Your transport request keeps on running and running......

Take a look at tables TMSTLOCKR and TMSTLOCKP. These contain the list of transports currently running, and they are not always cleared when a transport has completed.
If they have an entry for your change request, it will need to be removed.

I usually do this at the database level with a SQL statement.

If this does not resolve your problem, check the /usr/sap/trans/tmp directory
for a .lob file. If one exists, delete it and then rerun the last transport.

The SQL query would be something like this:

delete from XYZ.TMSTLOCKR where TRKORR = 'XXXYYYY'

XXXYYYY - Request Number

XYZ - Owner of the table
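It is prudent to look at the entry before deleting it (same placeholders as above):

select * from XYZ.TMSTLOCKR where TRKORR = 'XXXYYYY'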

Friday 15 June, 2007

Locking/Unlocking a Client in an SAP System....

Locking/Unlocking a Client in an SAP System....

To lock or unlock a single client in an R/3 System, run the following function modules via tcode SE37:

SSCR_LOCK_CLIENT - run this function module using transaction SE37 and specify the name of the client to be locked. Even with the client locked, the SAP default users will still be able to log in.

SSCR_UNLOCK_CLIENT - run this function module using SE37 to unlock the client.

Locking/unlocking the entire system, an admin can use the following commands:

1. tp locksys <SID> pf=<transport profile>

2. tp unlocksys <SID> pf=<transport profile>

<transport profile> - the transport profile in /usr/sap/trans/bin, usually TP_DOMAIN_<SID>.PFL

If you want to lock a client, another way is to lock all the users so that no one can log in to that particular client. This can be done using transaction SU01 for a small number of users and SU10 if there are a large number of users.

Note: Before you do mass locking of users in SU10, make sure you make a note of all the users who are already locked. Otherwise, when you unlock the users in SU10, those previously locked users may also be unlocked.

Determining whether 32 bit or 64 bit Windows....

HOW TO: Determine Whether Your Computer Is Running a 32-Bit Version or 64-Bit Version of Windows XP

SUMMARY

This step-by-step article describes how to determine whether a computer is running a 32-bit Microsoft Windows operating system or a 64-bit Microsoft Windows operating system. To determine whether your computer is running a 32-bit Windows operating system or a 64-bit Windows operating system, examine the system information. To do this, use one of the following methods.

Note Intel Itanium-based computers can only run 64-bit versions of Windows. Intel Itanium-based computers cannot run a 32-bit Windows operating system. Currently, 64-bit Versions of Windows only run on Itanium-based computers.

Use My Computer to Check System Properties

1. Click Start, right-click My Computer, and then click Properties.
2. In the System Properties window, click the General tab.
3. Note the first line under Computer:
• On 64-bit systems, Itanium or Itanium 2 appears on this line.
• On 32-bit systems, a value other than Itanium or Itanium 2 appears on this line.

Use the Winmsd.exe Tool to Check the System Type

1. Click Start, and then click Run.
2. Type winmsd.exe in the Open box, and then click OK.
3. Locate System Type in the right pane under Item, and note the value.
• If the value that corresponds to System Type is X86-based PC, the computer is running a 32-bit version of the Windows operating system.
• If the value that corresponds to System Type is Itanium-based system, the computer is running a 64-bit version of the Windows operating system.


Use the Winmsd.exe Tool to Check the Processor

1. Click Start, and then click Run.
2. Type winmsd.exe in the Open box, and then click OK.
3. Locate Processor in the right pane under Item, and note the value.
• If the value that corresponds to Processor starts with x86, the computer is running a 32-bit version of the Windows operating system.
• If the value that corresponds to Processor starts with ia64, the computer is running a 64-bit version of the Windows operating system.

.....This article is from Microsoft Website.....
--------------------------------------------------------------------------------

APPLIES TO
• Microsoft Windows XP Professional
• Microsoft Windows XP Tablet PC Edition
• Microsoft Windows XP Media Center Edition 2002
• Microsoft Windows XP Home Edition
• Microsoft Windows XP Professional 64-Bit Edition (Itanium)

Send e-mails of any SAP report in PDF format...

It is possible to send e-mails of any SAP report in PDF format!

Follow these steps:

* To activate SMTP, go to transaction SICF.
* Right-click on SAPconnect and select "Activate SMTP host".
* To configure SMTP, go to transaction SCOT.
* Double-click the SMTP node of INT.
* Select "node in use".
* In SMTP connection, enter the SMTP address of your company (e.g., smtp.company.com) and the mail port (usually 25).
* Click the Set button for Internet and put * as the address (you can make some restrictions here, e.g., *.company.com).
* Click OK.
* Select settings. The default is domain.
* Execute the RSPO0075 report in SE38 tcode and create the e-mail method "M".
* Create a new output device in the SPAD transaction and enter PDF1 as the device type and M as the access method. No e-mail is required.

That's all. Then, when you want to send any report by e-mail in PDF format, pull up the report on your screen. Then go to print, select the output device created before, and enter the destination e-mail address. Don't forget to select "print out immediately."

You also need to be sure to have the RSCONN01 report scheduled to run periodically (e.g., every 15 minutes).
This report is responsible for delivering the SAPconnect objects.

Note: This tip only works from Web AS 6.20 (WAS620) onwards.

Plant 001 is missing in ECC 6.0 (EC02 Tx code)

Plant 001 is missing in ECC 6.0 (EC02 Tx code)

When you install ECC 6.0, your functional consultants may come to you and report this error:

You will not be able to use the Plant 0001 in your system even though it exists in the system.

The reason for this system behavior is that Plant 0001 does not have
a valid address number and, as a result, also has no corresponding
entry in the Central Address Management (CAM).

Many applications in the R/3 System use search help H_T001W or similar search helps.

This search help uses (among others) the following parameter:

T001W-ADRNR = ADRC-ADDRNUMBER

In your case, T001W-ADRNR is initial and a corresponding entry in
ADRC is missing. That's the reason why Plant 0001 is not displayed.

SAP does not recommend the usage of plants without address data.

To avoid this "error", simply re-maintain the missing address data
(e.g., via OX10 => "Envelope" button). When the changes are saved,
a new address number is taken from the address number range,
and the corresponding ADRC entry is created.

Once you have maintained this data, you can use Plant 0001 as a template for creating other plants (by copying).

Thursday 14 June, 2007

SAPJSF User Problem in IDES ECC during installation...

Workaround for the Installation Problem when Installing the JAVA-AddIn

Step 1:
Log on to the IDES ECC ABAP system with user "idadmin" / password "ides123" (client 001)

- Enter transaction code SNRO
-> Enter Object Name: AENDBELEG
-> Select Button: Number Ranges

-> Edit Intervals

-> Enter the following values as shown:

No. - 01
From number - 1
To Number - 9999999999
Current Number - 0

-> Save


Step 2
Maintain User Data for User SAPJSF

In the IDES ECC ABAP system, enter transaction code SU01
-> Enter user SAPJSF and switch to change mode
-> Change the User Type to "System"
-> Enter the Master Password (that you defined during the installation process)
-> Save


enter Transaction code PFCG
-> Enter Role Name: SAP_BC_JSF_COMMUNICATION_RO
-> Choose TabPage User
-> Assign User SAPJSF to this role
-> Save

!Ignore message: No active plan version exists!


-> Repeat Installation Process on the Application Server

Step 3
Maintain User Data for User J2EE_ADMIN (as done before for user SAPJSF)

enter Transaction code PFCG
-> Enter Role Name: SAP_J2EE_ADMIN
-> Choose TabPage User
-> Assign User J2EE_ADMIN to this role
-> Save



-> Repeat Installation Process on the Application Server

Step 4
Maintain User Data for User J2EE_GUEST (as done before for user SAPJSF)

enter Transaction code PFCG
-> Enter Role Name: SAP_J2EE_GUEST
-> Choose TabPage User
-> Assign User J2EE_GUEST to this role
-> Save

-> Repeat Installation Process on the Application Server

Exchange Infrastructure (XI) in a Nutshell (Getting Started).....

Getting Started with Exchange Infrastructure now becomes easy.....

Here is a blog with everything you need to start working on XI.....

Check out the transactions which are frequently used:

SXMB_IFR - Start Integration Builder

SXMB_MONI - Integration Engine (Monitoring)

SXI_MONITOR - XI Message Monitoring

SLDCHECK - test SLD Connection

Some more...

SLDAPICUST - SLD API Customizing

SXMB_ADM - Administration of Integration Engine

SXI_CACHE - Directory Cache of XI

SXMB_MONI_BPE - Process Engine - Monitoring

Some very useful URLs....

hostname - the host name of the server on which XI is running

<nn> - the two-digit instance number; the port is 5<nn>00 (for example, instance 00 -> port 50000, instance 01 -> port 50100)

http://hostname:5<nn>00/rep --- Exchange Infrastructure Tools

http://hostname:5<nn>00/sld --- System Landscape Directory

http://hostname:5<nn>00/rwb --- Runtime Workbench

Some more URLs which can be of use (not in the initial stage)...

http://hostname:5<nn>00/MessagingSystem --- Message Display Tool

http://hostname:5<nn>00/mdt/amtServlet --- Adapter Monitor

http://hostname:5<nn>00/exchangeProfile --- Exchange Infrastructure Profile

http://hostname:5<nn>00/CPACache --- CPA Cache Monitoring

http://hostname:5<nn>00/CPACache/refresh?mode=delta --- Delta CPA cache refresh

http://hostname:5<nn>00/CPACache/refresh?mode=full --- Full CPA cache refresh

Now some transactions related to IDocs, as IDocs play a very important role in XI...

WE60 - Documentation for IDoc types

BD87 - Status Monitor for ALE Messages

IDX1 - Maintenance of Port in IDoc Adapter

IDX2 - Meta Data Overview in IDoc Adapter

WE05 - Lists the IDocs

WE02 - Displays the IDoc

WE19 - Test Tool

WE09 - Search for IDocs by Content

Backup - Restore MS SQL using Enterprise Manager

Backup - Restore MS SQL using Enterprise Manager

--------------------------------------------------------------------------------

Backup:
*********

1) Open the SQL Server Enterprise Manager.
2) Click on "Microsoft SQL Servers" in left pane.
3) Expand your server group.
4) Click on "Databases".
5) Right-click the database you wish to back up.
6) Click on "All Tasks" > "Backup Database". A new backup window will appear.
7) Select the "Database - complete" option.
8) Click on the "Add" button in the destination frame. A new window appears for selecting the backup destination.
9) Select the "Filename" option.
10) Enter the path where you wish to save the backup along with the file name of the backup set. Please specify the extension ".bak" for the file name of the backup set.
11) Click "OK" twice.
12) A progress indicator will show the backup progress.


Restore:
*********
Log in to the SQL server and run Enterprise Manager.
Select the database onto which you would like to restore the data,
right-click on the database, and
select 'Restore database' from under the 'All tasks' menu.
Under the 'General' tab, select 'From Device'.
Under the 'Parameters' section, click 'Select device'.
In the dialog box that appears, select 'Add', then browse to the .BAK file using the option under 'File'.
Select 'OK' across all sub-options and finally in the main dialog box to perform the restore operation.
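For reference, the same backup and restore can be scripted in plain T-SQL (the database name and file path are illustrative):

BACKUP DATABASE PRD
TO DISK = 'E:\backup\PRD.bak'
WITH INIT

RESTORE DATABASE PRD
FROM DISK = 'E:\backup\PRD.bak'
WITH REPLACE

WITH INIT overwrites any existing backup sets in the file; WITH REPLACE allows restoring over an existing database.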

Activating SAP* User in Visual Admin in SAP

Activating SAP* User in Visual Admin

Use

In emergency situations where you have configured user management incorrectly and can no longer log on to any applications, or all administrator users are locked, you can activate the emergency user SAP* to allow you to log on to applications, in particular the configuration tools, and change the configuration. The SAP* user has full administrator authorizations. For security reasons, the SAP* user does not have a default password, so you must specify a password when you activate the user.

Procedure

1. Activate the SAP* user:

a. Start the Config Tool for editing UME properties as described in Editing UME Properties.

Editing UME Settings:
1. Start the Config Tool by executing <J2EE Engine installation directory>\j2ee\configtool\configtool.bat.
2. In the tree, navigate to Global server configuration -> services -> com.sap.security.core.ume.service. The list of UME properties appears.
- To change the value of a property:
i. Select the property in the list.
ii. In Value at the bottom of the screen, enter the new value that you want to assign to the property.
iii. Choose Set.
iv. Choose (Apply changes).
- To restore a property to its default value:
i. Select the property in the list.
ii. Choose Restore to default.
iii. Choose (Apply changes).
- To restore all properties to their default values:
i. Choose Restore all to default.
3. Restart the nodes in the cluster for the changes to take effect.

b. Set the following properties:

ume.superadmin.activated = true (this activates the SAP* user)
ume.superadmin.password = <a password of your choice> (this defines the password for the SAP* user)

c. Restart the Java application server.

The SAP* user is now activated. While it is activated, all other users are deactivated. You can only log on with the SAP* user.

2. Fix your configuration as required, logging on with the user ID SAP* and the password you specified.

For example, to log on to the Visual Administrator to unlock users or create a new administrator user, proceed as follows:
i. Start the Visual Administrator by executing <J2EE Engine installation directory>\j2ee\admin\go.bat.
ii. If you have not already done so, you have to create a new connection entry in which you specify SAP* as the user name. For more information, see Creating a New Connection Entry.
iii. In the Connect to SAP J2EE Engine dialog, select the connection entry you created and choose Connect.
iv. Enter the password you specified in the UME properties and choose Connect.
v. Do whatever is necessary to fix your configuration. For example, unlock all administrator users.

Similarly, you can log on to the UME administration console using the SAP* user.
3. When you have fixed your configuration, deactivate the SAP* user again:
a. Start the Config Tool for editing UME properties as described in Editing UME Properties.
b. Set the property ume.superadmin.activated to false.
c. Restart the Java application server.

SDM Password Change in SAP....

SDM Password Change

Just enter the following lines at your COMMAND LINE:

1. Change to the SDM directory: cd C:\usr\sap\J2E\JC00\SDM\program

2. Execute the following (just copy-paste)
sdm jstartup "sdmhome=C:\usr\sap\J2E\JC00\SDM\program" "mode=standalone"

3. Execute the following line (replacing Password1 with your new password for SDM)
sdm changepassword "sdmhome=C:\usr\sap\J2E\JC00\SDM\program" "newpassword=Password1"

4. I suggest you also revert SDM back to integrated mode
sdm jstartup "sdmhome=C:\usr\sap\J2E\JC00\SDM\program" "mode=integrated"

5. Start SDM - StartServer.bat

When administrator or j2ee_admin user is locked in SAP...

When the administrator or j2ee_admin user is locked in SAP...
If you have forgotten the password of "Administrator or J2EE_ADMIN", or the user is locked, just follow the steps below:


STEP-1: Enable "SAP*"

1. Start the Config Tool: C:\usr\sap\<SID>\<instance>\j2ee\configtool\configtool.bat
Ex: D:\usr\sap\F02\JC00\j2ee\configtool --> configtool.bat

2. Go to cluster-data --> Global server configuration --> services --> com.sap.security.core.ume.service

3. Double-click the property "ume.superadmin.activated" and set it to TRUE.

4. Double-click the property "ume.superadmin.password" and enter a password of your choice.

5.Save.

6.Restart the engine.

STEP-2: Login with "SAP*" into portal

1. http://<host>:<port>/useradmin/index.jsp

2. Enter user ID / password as "SAP* / <the password you set>"

3. Search for "Administrator" user

4. Reset or change the password for "Administrator"


STEP-3: Disable "SAP*"

1. Start the Config Tool: C:\usr\sap\<SID>\<instance>\j2ee\configtool\configtool.bat
Ex: D:\usr\sap\F02\JC00\j2ee\configtool --> configtool.bat

2. Go to cluster-data --> Global server configuration --> services --> com.sap.security.core.ume.service

3. Double-click the property "ume.superadmin.activated" and set it to FALSE.

4.Save.

5. Restart the engine.


STEP-4: Login with "Administrator"

1. http://<host>:<port>/useradmin/index.jsp

2. Enter user ID / password as "Administrator / <the new password>"

3. It will ask you to change the password; just change it.

Administering Databases and Datafiles

Administering Databases and Datafiles

Today you will begin to look at some of the basic tasks the Oracle DBA must perform. Specifically, you will look at the database-creation procedure as well as some of the tuning parameters that must be set.

The database-creation operation is split into the tasks of the DBA and the tasks of the end user or application developer. These tasks are split based on what level they access the DBMS.

The Oracle DBA is responsible for all tasks that relate to the DBMS at the low level. Operations that involve the management of datafiles, redo log files, control files, tablespaces, extents, and segments are the responsibility of the DBA, as are the tasks of creating the tables, indexes, clusters, and views (with certain installations, some of these tasks might be performed by or in conjunction with the application development team). In any case, these responsibilities are addressed separately.

Tasks Involved in Creating a Database

Creating a database involves one Oracle DDL statement, and perhaps weeks or months of preparation to be ready for that one step. To create a database, you must know a lot about the data that will be put into the database, the data-access patterns, and the database's volume of activity. All these factors are used to determine the layout of the datafiles and redo log files. These are the responsibility of the Oracle DBA.

Under Windows NT, you must create the instance before you create the database. Because Oracle runs as a service under NT, the instance is necessary for the database-creation phase. An instance can be created, modified, or deleted through the NT Instance Manager. This should not be confused with the Enterprise Manager instance-management tool. Procedures on how to create this bootstrap instance were covered yesterday.

Creating the database actually occurs in two separate--but related--steps. The first step involves the actual database-creation command. This command creates the redo log files, the control files, and the datafiles necessary to create the SYSTEM tablespace. The SYSTEM tablespace contains the SYSTEM rollback segment, the data dictionary, stored procedures, and other structures necessary to run the Oracle instance.
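For orientation, a minimal sketch of that first step (names, paths, and sizes are purely illustrative; a real command reflects your planned layout):

CREATE DATABASE orcl
LOGFILE GROUP 1 ('D:\database\log1a.dbf') SIZE 10M,
        GROUP 2 ('D:\database\log2a.dbf') SIZE 10M
DATAFILE 'D:\database\system01.dbf' SIZE 100M
CHARACTER SET WE8ISO8859P1;

This one statement creates the redo log files, the control files (at the locations named by the CONTROL_FILES initialization parameter), and the datafile for the SYSTEM tablespace.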

The second phase involves adding tablespaces, tables, indexes, and so on that are used to store your specific data. The first phase described here is covered today; the remaining tasks necessary to finish creating your database will be described tomorrow. It is only when these additional tablespaces are added and your tables are created that your database is complete.

It is important that the DBA and end user work together in defining the database, because the physical layout and the actual data should be configured in an optimal manner. If you underconfigure the hardware or create a poor database layout, you will see a severe degradation in performance.

Tasks of the DBA

The DBA is responsible for all the low-level formatting of the database. I refer to this as formatting because that is basically what these steps do. When you format a disk, it is checked and zeroed out; likewise, when you create a tablespace and datafile, Oracle essentially checks out the disk space and lays down its internal format on the physical disk.

The DBA is responsible for creating the database, adding datafiles, and managing the control files and redo log files necessary for the proper function of the Oracle RDBMS. The DBA is also responsible for allocating these resources to the end user so that he or she can properly use them. The DBA or developer must then build tables, indexes, and clusters on these tablespaces. After the tables have been built and loaded, the user can then access this data.

Tasks of the User or Developer

It is the responsibility of the developer to relay to the DBA what the structure of the data should be and how it will be accessed. In this way, the DBA can have all of the information necessary to properly lay out the database. It is the responsibility of both the DBA and the application developer to work together to provide a stable and usable environment for the end user.

Designing the Database

Designing the database can be quite complex and time consuming, but well worth the effort. Any mistakes at this point can be very costly in terms of performance and stability of the system in the long run. A well-designed system takes into account the following factors:
Performance--The database that has been designed for performance from the very beginning will outperform any system that has not. Many critical performance items can only be configured in the design stage, as you will soon see.
Backup--Often, the DBA is given only a short time to accomplish the required backup operations. By planning the data layout at the very beginning with this criterion in mind, these operations can more easily be accomplished.
Recovery--Nobody plans for his system to crash, but it is an unfortunate fact of life that hardware and software components sometimes fail. Planning can facilitate the recovery process and can sometimes be the difference between recovering and not recovering.
Function--The database layout has to take into account its ultimate function. Depending on what type of applications are being run and what the data looks like, there might be different design considerations.

Physical Database Layout

As part of the design considerations mentioned previously, the physical layout of the database is very important. You should consider several factors when designing the physical layout, including
Database size--You must be able to support the amount of data you will be loading into the database.
Performance--A physical disk drive can support only a certain number of I/Os before performance begins to suffer.
Function--You might decide to lay out tablespaces based on their function. This allows different departments to have different backup schedules, and so on.
Data protection--It is very important that some types of files be protected against media failure. Primarily, the redo log files and the archive log files need to be protected.
Partitioning--Depending on what type and amount of partitioning you will be doing, the physical layout might vary.

So that you gain a complete understanding of how and why the physical database design might vary based on function, let's review a few basic factors.

Database Size

The size of the database is a key factor in how the physical layout is designed. For very small databases, this might not be much of an issue, but for very large databases it can be a major issue. You must make sure that you have not only enough space for the datafiles themselves, but also for associated indexes. In some cases, you might need to have a large temporary area to copy input files to before they are loaded into the database. Oracle has a few restrictions on the size of the components of the database:
The maximum size of a datafile is 32GB (gigabytes).
The maximum number of datafiles per tablespace is 1,022.
The maximum size of a tablespace is 32TB (terabytes).

As you can see, Oracle allows you to create and maintain very large databases. You might think this is an incredible size for a database and no system will ever achieve this size. Well, I can remember when a 10MB disk drive was huge for a PC. If industry trends continue the way they've been going, I would not be surprised to see systems with 32TB tablespaces in the near future.

As you saw on Day 4, "Properly Sizing Your Database and Planning for Growth," it is not only necessary to build your system with today's requirements in mind, but also to plan for the future. Systems can increase in size at incredible rates, and you must be ready for it.

Performance

An important factor to remember when designing the physical layout of your database is the performance of the various components in the system. The load on the system caused by numerous users requesting data will generate a certain amount of disk I/O.

The disk drives that comprise the system can service only so many I/Os per second before the service time (the time it takes for an I/O to complete) starts increasing. In fact, it is recommended that for a standard 7200 RPM SCSI disk drive, you run it at only the following rates:
Random I/O--60-70 I/Os per second per disk drive.
Sequential I/O--100 I/Os per second per disk drive.

NOTE: With a sequential I/O, the data that is requested is either on the same track as the last data accessed or on an adjacent track.
With a random I/O, the data that is requested is on another track on the disk drive, which requires the disk arm to move, thus causing a seek. This track seek takes much more time to complete than the actual reading of the data.

Taking these factors into account, you should isolate the sequentially accessed data and spread out the randomly accessed data as much as possible. A hardware or software disk array is a good way to spread out these randomly accessed I/Os. By determining the amount of I/O traffic that will be generated, you can decide how many disk drives are required. A lack of disk drives can cause severe performance problems. In many cases, you will find that you are required to use many more disk drives for performance reasons than you would for size requirements.
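As a quick worked example (illustrative numbers only): if the application is expected to generate 500 random I/Os per second at peak, then at 60-70 I/Os per second per drive you need roughly eight drives (500 / 65 ≈ 7.7) for performance reasons, even if two drives would satisfy the space requirement.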

TIP: The redo log files are sequentially accessed, as are the archive log files. These files should be isolated from randomly accessed files in order to increase performance.

Function

You might also find that you want to separate your database into different tablespaces based on function. That way, maintenance operations and backups can be done on a per-department basis. For example, you can put accounting and sales on different tablespaces so they can be backed up separately.

You will also find that different types of operations have different characteristics. For example, an OLTP system that has a large number of updates is very sensitive to the placement of the redo logs due to performance considerations. This type of system might also be continuously creating archive log files that need to be protected and backed up. This requires some planning.

On the other hand, a decision support system (DSS) that primarily queries might not need a high-performance redo log volume, and archiving might occur only once per day. In that case, you might want to design your database layout to favor the datafiles.

Data Protection

The primary job of the DBA is to protect the data in the system. As part of this job, you, the DBA, must determine how to protect that data. As you saw on Day 2, "Exploring the Oracle Architecture," every change that Oracle makes to the database is written to the redo log files and, in turn, these redo log files are archived. These redo log files and archive log files can be used, in conjunction with a recent backup, to recover the database to the point of system failure. This is, of course, assuming that the redo log files and archive log files are intact.

It is therefore necessary to protect the redo log files and archive log files from media failure. This can be accomplished either via hardware or software fault tolerance. I prefer hardware fault tolerance in the form of a RAID (redundant array of inexpensive disks) subsystem, but software fault tolerance is also very good.

There are several options available with RAID controllers; the most popular are RAID-1 and RAID-5. Each has advantages and disadvantages, as shown here:
RAID-1--Also known as mirroring. The entire contents of a disk drive are duplicated on another disk drive. This is the fastest fault-tolerant method and offers the most protection. It is, however, the most costly because you must double your disk-space requirements.
RAID-5--Also known as data guarding. In this method of fault tolerance, a distributed parity is written across all the disk drives. The system can survive the failure of one disk drive. RAID-5 is very fast for reading, but write performance is degraded. RAID-5 is typically too slow for the redo log files, which need fast write access. RAID-5 can be acceptable for datafiles and possibly for the archive log files.
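
To make the cost trade-off concrete, consider six 9GB disk drives (purely illustrative numbers):

RAID-1: (6 x 9GB) / 2 = 27GB usable (half the raw capacity holds the mirror copies)
RAID-5: (6 - 1) x 9GB = 45GB usable (one drive's worth of capacity holds the distributed parity)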

TIP: It is a good idea to put your operating system and redo log files on separate RAID-1 volumes. This provides the best level of protection and performance.

Typically, the archive log files can reside on a RAID-5 volume because performance is not critical. If you find that you are having trouble keeping up with the archive log writes, you might need to move them to RAID-1.

Your datafiles can reside on a non-fault-tolerant disk volume if you are limited on budget and can afford to have your system down in the event of a disk failure. As long as you have a good backup, you lose no data.

Partitioning

You might also decide to adjust the physical layout of your database based on the partitioning method you have chosen. Oracle has introduced a new partitioning method with Oracle8. Various partitions can be allocated to Oracle tables based on ranges of data. Because the partitioning is actually done at the tablespace level and the tablespaces are made up of datafiles, it is important to plan your partitioning before you build your datafiles.

Because Oracle supports only range partitioning, whether you partition your data depends on your application and data. If you can take advantage of partitioning, you will definitely see some advantages in terms of reduced downtime and increased performance.

Creating the Instance

Before you can create the Oracle database under Windows NT or even start up the Oracle instance, you must create an instance. Follow the steps in the previous chapter to create the Oracle instance; start up the instance, and then you can create the database. Because Oracle functions as a service under NT, you cannot create a database without creating the instance.

Creating the Database

When you create a database, you are primarily creating the redo log files, the control files, and the SYSTEM tablespace. This SYSTEM tablespace is where important structures such as the data dictionary are kept. The data dictionary keeps track of all of the datafiles, the database schema, and all other pertinent database information. After you create this initial database, you will create more tablespaces and assign your schema to those tablespaces. So let's continue creating the initial database.

After the instance has been created, you can create the database. Creating the database is done either through Enterprise Manager or with the CREATE DATABASE DDL command. Although Enterprise Manager is quite convenient and easy to use, I prefer to script the creation procedure into a SQL file. By doing this, you can easily run this creation procedure over and over again and modify it for other purposes. This also provides you with a record of how this procedure was done.

Setup

There are a few initial setup steps that should be completed before you begin the actual creation process. These steps are designed to help you create the right configuration as well as to protect yourself from potential future problems. These steps involve the following:

1. Backing up any existing databases on the system

2. Creating the init.ora file

3. Starting up the Oracle instance

If you follow these steps, you should be ready to successfully create an Oracle database.

Let's look at these steps.

Backing Up Existing Databases

This is purely a precautionary step. It is always a good idea to back up all your databases on a regular basis. It is also recommended that you back up your databases prior to any major system changes, such as the creation of a new database.

No matter how careful you are in preparing for the database creation, there is always some danger in making major changes to the system. Because it is possible that a mistake could affect existing control files, redo log files, or datafiles, this precaution might save you quite a bit of work.

If some unforeseen event causes data loss in an existing database, the recovery process will be facilitated by having a fresh backup. This is just a precaution, and one that is well worth the time and effort.

Creating the init.ora File

It is necessary to create a new parameter file for each new database. The parameter file, also known as the init.ora file, contains important information concerning the structure of your database. All the Oracle tuning parameters are described in Appendix B, "Oracle Tuning Parameters," but a few parameters are critical to the creation of the database:
DB_NAME--This parameter specifies the name of the database. The DB_NAME parameter is a string of eight or fewer characters. This name is typically the same as your Oracle SID (system identifier). The default database was built with DB_NAME = oracle.
DB_DOMAIN--This parameter specifies the network domain where your server resides. This parameter, in conjunction with the DB_NAME parameter, is used to identify your database over the network. The default database was built with DB_DOMAIN = WORLD.
CONTROL_FILES--This parameter specifies one or more control files to be used for this database. It is a very good idea to specify multiple control files, in case of disk or other failures.
DB_BLOCK_SIZE--This parameter specifies the size of the Oracle data block. The data block is the smallest unit of space within the datafiles, or in memory. The DB_BLOCK_SIZE can make a difference in performance, depending on your application. The default size is 2,048 bytes, or 2KB. After the database is built, the block size cannot change.
DB_BLOCK_BUFFERS--This parameter specifies the number of blocks to be allocated in memory for database caching. This is very important for performance. Too few buffers cause a low cache-hit rate; too many buffers can take up too much memory and cause paging. This parameter can be changed after the database has been built.
PROCESSES--This parameter specifies the maximum number of OS processes or threads that can be connected to Oracle. Remember that this must include five extra processes to account for the background processes.
ROLLBACK_SEGMENTS--This parameter specifies a list of rollback segments that are acquired at instance startup. These segments are in addition to the system rollback segment. This should be set after you create the rollback segments for your database.

The following parameters should also be set, based on your licensing agreement with Oracle:
LICENSE_MAX_SESSIONS--This parameter specifies the maximum number of concurrent sessions that can connect into the Oracle instance.
LICENSE_SESSIONS_WARNING--This is similar to LICENSE_MAX_SESSIONS in that it relates to the maximum number of sessions that can be connected to the instance. After LICENSE_SESSIONS_WARNING sessions have connected, you can continue to connect more sessions until LICENSE_MAX_SESSIONS has been reached, but you will receive a warning from Oracle that you are approaching your limit.
LICENSE_MAX_USERS--This parameter specifies the maximum number of unique users that can be created in the database.
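
Putting these parameters together, a minimal init.ora for a small database might look something like the following sketch. The file names, buffer count, and license limits are illustrative only; substitute values appropriate for your system:

# initDOGS.ora -- illustrative parameter file for the dogs database
db_name = dogs
db_domain = WORLD
control_files = (d:\database\ctl1dogs.ora, e:\database\ctl2dogs.ora)
db_block_size = 2048
db_block_buffers = 1000
processes = 50
# rollback_segments = (rb1, rb2)  # set after the rollback segments exist
license_max_sessions = 100
license_sessions_warning = 90
license_max_users = 100

Because two control files are listed on separate drives, the loss of a single disk cannot destroy the only copy of the control file.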

After these parameters are set, you can move on to the next phase: starting up the Oracle instance.

Starting Up the Oracle Instance with NOMOUNT

Before you start up the Oracle instance, check your SID. This will indicate which database you will connect to. You should typically set your SID to the same name as in the DB_NAME parameter. When your application connects into Oracle, it uses the SID to determine which database (if there is more than one) to connect to. Depending on the application and your network, the SID might be used to connect you to a particular database on a particular system via SQL*Net.

This is similar to starting up the instance as shown yesterday, except that to create a database, the instance must be started with the NOMOUNT option (this is because no database associated with that instance is available to mount). After the SID has been checked, you can start the Oracle instance. This can be accomplished in two ways: by using the Oracle Instance Manager or by using Server Manager. The Server Manager method is presented here.

Starting the Instance with Server Manager

The way I prefer to build a database is by scripting it into a command file. That way, I will have a permanent record of what I have done to create the database. The first command in my script will be to start the Oracle instance in NOMOUNT mode as follows:

connect internal/oracle

startup [pfile=c:\orant\database\initORCL.ora] NOMOUNT;


NOTE: The brackets indicate an optional parameter. If the pfile parameter is not specified, c:\orant\database\initSID.ora will be used (where SID is the value of your SID environment variable).

By scripting the process, you can reuse it if you ever need to re-create the database, or use it as a template for other database creations.

Creating the Database

After you have created the instance, you can move on to the next stage: creating the database itself. As with the instance, it is possible to create the database either from a graphical tool (in this case, the NT Instance Manager) or from the command line or a script using the Oracle Server Manager. Here you will look at the Server Manager method; I prefer character-based creation because it can be scripted and thus reused.
Creating the Database with Server Manager

To create the database with Server Manager, you must type it manually or, as I prefer, use a SQL script. The database is created with the CREATE DATABASE command.
The Syntax for CREATE DATABASE

The syntax for this command is as follows:

SYNTAX:

CREATE DATABASE [database]
[CONTROLFILE REUSE]
LOGFILE [GROUP group_number] logfile
[, [GROUP group_number] logfile] ...
[MAXLOGFILES number]
[MAXLOGMEMBERS number]
[MAXLOGHISTORY number]
[MAXDATAFILES number]
[MAXINSTANCES number]
[ARCHIVELOG or NOARCHIVELOG]
[EXCLUSIVE]
[CHARACTER SET charset]
[NATIONAL CHARACTER SET charset]
DATAFILE file_specification
[AUTOEXTEND OFF or ON [NEXT number K or M] [MAXSIZE UNLIMITED or number K or M]]
[, DATAFILE file_specification
[AUTOEXTEND OFF or ON [NEXT number K or M] [MAXSIZE UNLIMITED or number K or M]]] ...

The various parameters and variables are
database--The name of the database to be created. This is up to eight characters long.
CONTROLFILE REUSE--This optional parameter specifies that any existing control files be overwritten with this new information. Without this parameter, the CREATE DATABASE command would fail if the control files exist.
LOGFILE--This parameter is followed by the log-file name. This specifies the name of the redo log file. You can specify the log-file group with the optional GROUP parameter, or a log-file group number will be assigned automatically.
MAXLOGFILES--This parameter specifies the maximum number of log-file groups that can be created for this database.
MAXLOGMEMBERS--This parameter specifies the maximum number of log-file members in a log-file group.
MAXLOGHISTORY--This is a parallel-server parameter that specifies the maximum number of archive log files to be used in recovery in a parallel-server environment.
MAXDATAFILES--This parameter specifies the maximum number of files that can be added to a database before the control file automatically expands.
MAXINSTANCES--This parameter specifies the maximum number of instances that the database can have open simultaneously.
ARCHIVELOG--This parameter specifies that the database will be run in ARCHIVELOG mode. In ARCHIVELOG mode, a redo log group must be archived before it can be reused. ARCHIVELOG mode is necessary for recovery.
NOARCHIVELOG--This parameter specifies that the database will be run in NOARCHIVELOG mode. In NOARCHIVELOG mode, the redo log groups are not archived. This is the default setting.
EXCLUSIVE--This parameter specifies that the database is mounted in EXCLUSIVE mode after it has been created. In EXCLUSIVE mode, only one instance can mount the database.
CHARACTER SET--This parameter specifies that the data in the database will be stored in the charset character set.
NATIONAL CHARACTER SET--This parameter specifies that the National Character Set used to store data in the NCHAR, NCLOB, and NVARCHAR2 columns will use the charset character set.
DATAFILE--This parameter specifies that the file identified by file_specification will be used as a datafile.

File specification is made up of the following:
`filename' SIZE number (K or M)--The file specification is used to define the name and the initial size in K (kilobytes) or M (megabytes) of the datafile.
[REUSE]--This parameter allows you to use the name of an existing file.

The following options are available to the DATAFILE parameter:
AUTOEXTEND OFF--Specifies that the autoextend feature is not enabled.
AUTOEXTEND ON--Specifies that the autoextend feature is enabled.

The following options are available to the AUTOEXTEND ON parameter:
NEXT--Specifies the number K (kilobytes) or M (megabytes) automatically added to the datafile each time it autoextends.
MAXSIZE UNLIMITED--Specifies that the maximum size of the extended datafile is unlimited. It continues to grow until it runs out of disk space or reaches the maximum file size.
MAXSIZE number (K or M)--Specifies that the maximum size that the datafile can autoextend to is number K (kilobytes) or M (megabytes).

The CREATE DATABASE command might seem to be quite complex, but it is not really that difficult. It is not necessary to use all the optional parameters, but as you gain experience, you might decide to use them. An example of creating a database is shown here:

CREATE DATABASE dogs CONTROLFILE REUSE
LOGFILE
GROUP 1 ( `d:\database\log1a.dbf', `e:\database\log1b.dbf') SIZE 100K,
GROUP 2 ( `d:\database\log2a.dbf', `e:\database\log2b.dbf') SIZE 100K
DATAFILE `d:\database\data1.dbf' SIZE 10M,
`d:\database\data2.dbf' SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 50M;

It is not necessary to create all the datafiles at database-creation time. In fact, if you are creating a large number of datafiles, it is more efficient to create the datafiles in parallel using ALTER TABLESPACE ADD DATAFILE.

The CREATE DATABASE command serializes its operations. So if you specify two datafiles, the second will not be created and initialized until the first one has completed. The operation of adding datafiles can, however, be accomplished in parallel. This will reduce the time necessary to create the database.
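
As a sketch, each of the following statements could be issued from a separate session so that the file initializations run in parallel (the tablespace and file names are illustrative):

ALTER TABLESPACE dogs_data  -- hypothetical tablespace name
ADD DATAFILE `f:\database\data3.dbf' SIZE 10M;

ALTER TABLESPACE dogs_data
ADD DATAFILE `g:\database\data4.dbf' SIZE 10M;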

Creating the Catalogs

After the database has been created, two scripts (CATALOG.SQL and CATPROC.SQL) should be run to create the data dictionary views. These views are important to the operation of the system as well as for the DBA. These catalog scripts can be run within the Server Manager by using the @ character to indicate that you are running a SQL script, as shown here:

@D:\ORANT\RDBMS80\ADMIN\CATALOG;
...
Much data returned
...
@D:\ORANT\RDBMS80\ADMIN\CATPROC;
...
Much data returned
...

You will see the SQL script text as it is running. This process is quite time-consuming and will display a very large amount of data.


NOTE: Running the CATALOG.SQL and CATPROC.SQL scripts will take a significant amount of time; don't worry if it seems like it is taking forever.
CATALOG.SQL

The Oracle SQL script CATALOG.SQL creates many of the views used by the system and by the DBA. These include the V$ tables that are referenced throughout the book. Also created by this script are the DBA_, USER_, and ALL_ views. Synonyms are also created, and many grants are done by this script. All these views, synonyms, and permissions are very important to the operation of the system.
CATPROC.SQL

The CATPROC.SQL script is also extremely important to the function of the system. This script sets up the database for the procedural option. The CATPROC.SQL script runs many other SQL scripts, including ones that set up permissions, insert stored procedures into the system, and load a number of packages into the database.

If you run the CATALOG.SQL and CATPROC.SQL scripts, your system will be configured and ready to create tables and load the database. Nonetheless, there might be other options you want to set or parameters you want to alter. These can be accomplished through the use of the ALTER DATABASE command, as shown in the next section.

Modifying the Database

Many of the tasks involved in modifying the Oracle database, tablespaces, and datafiles can be done via the Oracle Enterprise Manager tools or through the use of DDL statements via the Oracle Server Manager. Both methods are described in this section. As you will see, the Oracle Enterprise Manager simplifies the task by providing you with choices, but is somewhat limited in functionality.

Modifying the Database with the ALTER DATABASE Command

Modifying the database from Server Manager is accomplished via the ALTER DATABASE command. This command is used to alter various parameters and specifications on the database itself, and can be typed into Server Manager or run as a SQL script. The syntax of the ALTER DATABASE command is as follows.

ALTER DATABASE [database]
[MOUNT [STANDBY DATABASE] [EXCLUSIVE or PARALLEL]]
[CONVERT]
[OPEN [RESETLOGS or NORESETLOGS]]
[ACTIVATE STANDBY DATABASE]
[ARCHIVELOG or NOARCHIVELOG]
[RECOVER recover_parameters]
[ADD LOGFILE [THREAD number] [GROUP number] logfile
[, [GROUP number] logfile] ...]
[ADD LOGFILE MEMBER `filename' [REUSE] [, `filename' [REUSE]] ...
TO GROUP number or TO `filename' [, `filename'] ...]
[DROP LOGFILE GROUP number or `filename' [, `filename'] ...
[, GROUP number or `filename' [, `filename'] ...]]
[DROP LOGFILE MEMBER `filename' [, `filename'] ...]
[CLEAR [UNARCHIVED] LOGFILE
GROUP number or `filename' [, `filename'] ...
[, GROUP number or `filename' [, `filename'] ...]
[UNRECOVERABLE DATAFILE]]
[RENAME FILE `filename' [, `filename'] ... TO `filename' [, `filename'] ...]
[CREATE STANDBY CONTROLFILE AS `control_file_name' [REUSE]]
[BACKUP CONTROLFILE [TO `filename' [REUSE]] or [TO TRACE [RESETLOGS or NORESETLOGS]]]
[RENAME GLOBAL_NAME TO database[.domain] ...]
[RESET COMPATIBILITY]
[SET [DBLOW = value] or [DBHIGH = value] or [DBMAC ON or OFF]]
[ENABLE [PUBLIC] THREAD number]
[DISABLE THREAD number]
[CREATE DATAFILE `filename' [, `filename'] ... AS filespec [, filespec] ...]
[DATAFILE `filename' [, `filename'] ...
ONLINE or OFFLINE [DROP] or RESIZE number (K or M)
or AUTOEXTEND OFF or ON [NEXT number (K or M)] [MAXSIZE UNLIMITED or number (K or M)]
or END BACKUP]

The various parameters and variables for the ALTER DATABASE command are as follows:
database--This specifies the name of the database to be altered and is a character string up to eight characters in length.
MOUNT--This parameter is used to mount an unmounted database.

The various options to the ALTER DATABASE database MOUNT command are as follows:
MOUNT STANDBY DATABASE--This is used to mount a standby database. The standby database will be described in detail on Days 16, "Understanding Effective Backup Techniques," and 17, "Recovering the Database."
MOUNT EXCLUSIVE--This is used to mount the database in EXCLUSIVE mode. EXCLUSIVE mode specifies that only one instance can mount the database. This is the default mode for the ALTER DATABASE MOUNT command.
MOUNT PARALLEL--This is used to mount the database in PARALLEL mode. PARALLEL mode allows other instances to mount the database in a parallel-server environment.

Other optional parameters to the ALTER DATABASE command are
CONVERT--This option is used to convert an Oracle7 data dictionary to the Oracle8 data dictionary.
OPEN--This parameter opens the database for normal use. Optionally, you can specify the additional parameter RESETLOGS or NORESETLOGS.

The options to the ALTER DATABASE database OPEN command are as follows:
OPEN RESETLOGS--With the RESETLOGS parameter set, the redo logs are essentially reset to sequence number 1. This basically discards all information in the redo logs, thus starting over. The RESETLOGS option is required after an incomplete recovery done with the RECOVER UNTIL option of media recovery or after recovery using a backup control file. A backup should be taken immediately after an ALTER DATABASE OPEN RESETLOGS command. This is described in more detail on Days 16 and 17.
OPEN NORESETLOGS--This is the default operation of the ALTER DATABASE OPEN command, specifying not to reset the redo logs.

Other optional parameters to the ALTER DATABASE command are
ACTIVATE STANDBY DATABASE--This parameter is used to make a standby database into the current active database. The standby database is described in detail on Days 16 and 17.
ARCHIVELOG--This specifies that this database is running in ARCHIVELOG mode. In ARCHIVELOG mode, each redo log group is archived to an archive log file before it can be reused. ARCHIVELOG mode is essential for data recovery in the event of media failure.
NOARCHIVELOG--This specifies that the database is not running in ARCHIVELOG mode. Running in NOARCHIVELOG mode is very dangerous because media recovery might not be possible. See Days 16 and 17 for more details.
RECOVER--The recovery parameters are shown immediately after this section.
ADD LOGFILE `logfile'--This parameter is used to specify the addition of log files named `logfile' to the database. By specifying the THREAD option, you can add this log file to a specific parallel-server thread; omitting the THREAD parameter causes the redo log group to be added to your current instance. You can also specify the value of the GROUP parameter. If you omit the GROUP value, one is assigned automatically. You can specify one or more log-file groups with this parameter.
ADD LOGFILE MEMBER `filename'--This parameter adds members named `filename' to existing log-file groups. The optional parameter REUSE must be included if the file `filename' already exists. You specify the group that you are adding to in one of several ways.

The various options to the ALTER DATABASE database ADD LOGFILE MEMBER command are as follows:
TO GROUP number--This can be used if you know the log-file group identification parameter.
TO `filename'--You can also identify the log-file group by specifying the name or names of all members of the existing log-file group.
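
For example, to add a mirrored member to log-file group 1 of the dogs database created earlier (the new file name is illustrative):

ALTER DATABASE dogs
ADD LOGFILE MEMBER `f:\database\log1c.dbf' TO GROUP 1;  -- hypothetical file name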

Other optional parameters to the ALTER DATABASE command include
DROP LOGFILE--This parameter drops all members of a log-file group. You specify the group that you are dropping in one of two ways: by specifying the GROUP or by specifying members of the group as described here.

The various options to the ALTER DATABASE database DROP LOGFILE command are as follows:
GROUP number--If you know the group identifier, you can drop the log-file group by specifying it.
`filename'--You can also identify the log-file group to be dropped by specifying the name or names of all members of the existing log-file group.

Other optional parameters to the ALTER DATABASE command are
DROP LOGFILE MEMBER `filename'--This command is used to drop a member or members of a log-file group. The member to be dropped is specified by the log-file member's filename. One or more members can be specified.
CLEAR LOGFILE--This command is used to drop and re-create a log file. This can be used in the event of a problem with an existing log file. By using the optional UNARCHIVED qualifier, you can clear a log file that has logging information in it without having to first archive that logging information. If you use the UNARCHIVED qualifier, you will probably make your database unrecoverable in the event of media failure. You specify the log files that you are clearing in one of two ways: by specifying the GROUP or by specifying members of the group as described here.

The various options to the ALTER DATABASE database CLEAR LOGFILE command are as follows:
GROUP number--If you know the group identifier, you can drop the log-file group by specifying it.
`filename'--You can also identify the log-file group to be cleared by specifying the name or names of all members of the existing log-file group.
UNRECOVERABLE DATAFILE--This option to CLEAR LOGFILE is used if the tablespace has a datafile that is offline. This requires that the tablespace and the datafile be dropped after the CLEAR LOGFILE operation has finished.

Other optional parameters to the ALTER DATABASE command are
RENAME FILE `filename' TO `filename'--This command is used to rename datafiles or log files. This only changes the name in the control file, not on disk.
CREATE STANDBY CONTROLFILE AS `control_file_name'--This command is used to create a standby control file called control_file_name. The optional REUSE qualifier allows you to specify the name of an existing file that will be reused.
BACKUP CONTROLFILE--This command is used to create a backup of the control file.

This can be accomplished in the following two ways.

The various options to the ALTER DATABASE database BACKUP CONTROLFILE command are as follows:
TO `filename'--By assigning the backup control file to a filename, the control file will be backed up to this file. If the file already exists, the optional REUSE qualifier must be used.
TO TRACE--This optional parameter writes SQL to a trace file that can be used to re-create the control files. You can specify the qualifiers RESETLOGS or NORESETLOGS, which will add SQL to open the database with these options. The SQL statements are complete enough to start up the database, re-create the control files, and recover and open the database appropriately.

TIP: By running the ALTER DATABASE database BACKUP CONTROLFILE TO TRACE command after your database has been altered in any way, you will have a method of re-creating the control files if necessary. This is part of a good recovery plan.

Other optional parameters to the ALTER DATABASE command are
RENAME GLOBAL_NAME TO--This command allows you to rename the database name, domain name, or both.
RESET COMPATIBILITY--This command resets the compatibility level of the database to an earlier version of Oracle after the instance is restarted.

SET--The following trusted Oracle parameters are modified via the SET command: DBLOW = value, DBHIGH = value, and DBMAC ON or OFF. Trusted Oracle is not covered in this book. See the Trusted Oracle Administration Guide from Oracle for more information.
ENABLE [PUBLIC] THREAD number--This parallel-server command is used to enable a thread of redo log groups identified by number. The addition of the PUBLIC qualifier allows this log file thread to be used by any instance.
DISABLE THREAD number--This command disables a log-file thread group identified by number, making it unavailable to any instance.
CREATE DATAFILE `filename'--This parameter is used to create a datafile that was lost due to media failure and was not backed up.
AS filespec--This option of the CREATE DATAFILE command is used to specify the filespec specification parameters.
DATAFILE `filename'--The ALTER DATABASE database DATAFILE command has several different functions that allow you to change the state of database datafiles.

The various options to the ALTER DATABASE database DATAFILE `filename' command are as follows:
ONLINE--Brings the datafile online.
OFFLINE [DROP]--Takes the datafile offline. When the database is running in NOARCHIVELOG mode, use OFFLINE DROP to take the datafile offline.
RESIZE number (K or M)--This is used to resize a datafile to number K (kilobytes) or M (megabytes).
AUTOEXTEND OFF or ON--This command is used to alter a datafile to have autoextend either on or off. With autoextend on, the file will increase in size based on the AUTOEXTEND parameters.

The various options to the ALTER DATABASE database DATAFILE `filename' AUTOEXTEND ON command are as follows:
NEXT number (K or M)--This option specifies that the database will grow in increments of number K (kilobytes) or M (megabytes) whenever space requirements force the datafile to grow.
MAXSIZE UNLIMITED--This parameter specifies that the maximum size of the datafile is governed only by disk space and OS datafile limitations. On NT, a datafile can grow to 32GB in size.
MAXSIZE number (K or M)--This option specifies that the maximum size a datafile will grow to is number K (kilobytes) or M (megabytes).
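
For example, using the datafile created earlier (the sizes are illustrative), a datafile can be grown in place or allowed to extend itself on demand:

ALTER DATABASE dogs DATAFILE `d:\database\data1.dbf' RESIZE 20M;

ALTER DATABASE dogs DATAFILE `d:\database\data1.dbf'
AUTOEXTEND ON NEXT 5M MAXSIZE 100M;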

Another optional parameter to the ALTER DATABASE command is
END BACKUP--This option specifies that media recovery should not be done when an online backup was interrupted by an instance failure.

The parameters and options to the RECOVER clause are

RECOVER [AUTOMATIC] [FROM `path']
[[STANDBY] DATABASE]
[UNTIL CANCEL] or [UNTIL TIME `time'] or [UNTIL CHANGE number]
or [USING BACKUP CONTROLFILE] ...
[TABLESPACE tablespace [, tablespace] ...]
[DATAFILE `filename' [, `filename'] ...]
[LOGFILE `filename']
[CONTINUE [DEFAULT]]
[CANCEL]
[PARALLEL parallel_definition]

The various parameters and variables for the RECOVER option are
AUTOMATIC--This qualifier specifies that the recovery process automatically figures out the names of the redo log files that it needs to apply in order to perform media recovery.
FROM `path'--This qualifier allows you to specify the location of archive log files. This is useful because you do not always keep the archive log files in the directory where they were originally generated.
STANDBY--This recovers the standby database.
DATABASE--This is the default option. It indicates that the database should be recovered.
UNTIL--The UNTIL parameters are very important to the recovery of the database if you are recovering from a software or operator problem. These parameters allow you to recover up until a specific point.

The various options to the ALTER DATABASE database RECOVER UNTIL command are as follows:
UNTIL CANCEL--The database will be recovered until you submit an ALTER DATABASE database RECOVER CANCEL command.
UNTIL TIME `time'--This command performs a time-based recovery. It recovers all transactions that completed before `time'. The qualifier is given in the form `YYYY-MM-DD:HH24:MI:SS'. This can be quite useful if you know when the SQL statement that caused the failure was issued.
UNTIL CHANGE number--This performs a recovery up until the last transaction before the given system change number.

Other optional parameters to the ALTER DATABASE database RECOVER command are
USING BACKUP CONTROLFILE--This specifies that the recovery should be done using a backup control file.
TABLESPACE tablespace--This performs recovery only on the specified tablespace(s).
DATAFILE `filename'--This performs recovery only on the specified datafile.
LOGFILE `filename'--This performs recovery using the specified log file.
CONTINUE [DEFAULT]--This continues recovery after it has been interrupted. CONTINUE DEFAULT is similar, but uses Oracle-generated default values.
CANCEL--This cancels the UNTIL CANCEL-based recovery.
PARALLEL (DEGREE number)--This specifies the degree of parallelism to use during the recovery process. The number of parallel processes is determined by the value of number.

The recovery process is key to the stability of Oracle and your database. This topic is covered in much more detail on Days 16 and 17.

Let's look at a few examples of using the ALTER DATABASE command to perform regular maintenance tasks.

Changing to Use ARCHIVELOG Mode

If you are not running in ARCHIVELOG mode, you are in danger of losing data in the event of a system failure. To alter the database to run in ARCHIVELOG mode, use the following syntax:

ALTER DATABASE dogs ARCHIVELOG;
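
Note that the database must be mounted but not open when you switch it to ARCHIVELOG mode, so in practice this statement usually appears in a short startup script such as the following sketch (the pfile path is illustrative):

connect internal/oracle
startup mount pfile=c:\orant\database\initDOGS.ora
ALTER DATABASE dogs ARCHIVELOG;
ALTER DATABASE dogs OPEN;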


Performing a Timed Recovery

It is sometimes necessary to perform a timed recovery. If a SQL statement that caused data loss or a system failure was inadvertently run, you can recover until just before that statement was issued. Here is an example of how to perform a timed recovery:

ALTER DATABASE dogs RECOVER UNTIL TIME `1999-07-04:15:03:00';

This statement recovers the database until 3:03 p.m. on July 4, 1999.

Opening a Closed Database

Databases are often brought up and mounted but not opened for maintenance. To open a closed database, use the following syntax:

ALTER DATABASE dogs OPEN;

Backing Up a Control File

Backing up control files is an important operation. Here is an example of how to use ALTER DATABASE to back up your control files:

ALTER DATABASE dogs BACKUP CONTROLFILE TO `C:\backup\cntrlDOGS.dbf';

Backing Up a Control File to Trace

Backing up your control file to trace generates a SQL script that can be used to re-create the control file in the event of an emergency recovery. Use this syntax:

ALTER DATABASE dogs BACKUP CONTROLFILE TO TRACE;

Followup

Even after the database and datafiles have been created, your job is not over. You must watch the system carefully to make sure that you don't run out of space or other resources. As you saw on Day 4, capacity planning and sizing are not easy jobs. By anticipating and solving problems before they become critical, you will avoid costly setbacks. You must periodically monitor the system from the OS and the Oracle perspectives to avoid these types of problems.

Monitoring the Datafiles

To make sure you are not running out of space, you can use Enterprise Manager's Storage Manager utility. If you click the Datafiles icon on the left, you will see on the right a list of datafiles, the size of each file, and how much of it is used. This is a quick and easy way of determining whether you are running out of space in your datafiles. You can check this manually by looking at several different system views and adding up the individual free spaces. The Oracle Storage Manager simplifies this task.
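
For a manual check, a query such as the following sketch against the DBA_FREE_SPACE data dictionary view sums the free space in each tablespace; comparing the totals against the file sizes recorded in DBA_DATA_FILES shows how full each tablespace is:

SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
FROM dba_free_space
GROUP BY tablespace_name;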

Load Balancing

It is important that you not overdrive any of the disk drives or disk arrays in your system. This can severely hurt your performance. The I/O rates at which your system is running can be monitored with the NT Performance Monitor. I will not spend much time on the Performance Monitor, but I do want to mention a few points that you should watch out for:
Use diskperf--Turn on diskperf by using the NT command diskperf -y. By turning on diskperf, you will see much more information about your disk I/O rates when you run perfmon.
Monitor I/O--Use perfmon to look at PhysicalDisk. Of great importance are the reads and writes per second (throughput) and the seconds/read and seconds/write (latency).

If you see a disk drive or disk array (a disk array looks like a big disk drive to the OS) that has many more I/Os per second per disk than the others, you might have a balance problem.


TIP: The I/Os per disk drive per second should not exceed 60-70 on the data volumes. On an array, divide the number of I/Os by the number of drives to get the I/Os per drive.
A typical disk drive should take about 20-30 milliseconds to read or write to the drive. If your seconds/read or seconds/write is much higher, you are probably overloading your disks.