Congratulations! I have upgraded EBS 11.5.7 to 11.5.10.2

July 13, 2007
After three weeks of hard work on patching and backups, it's finally over!

The company's EBS has been upgraded from 11.5.7 with database 9.0.1.4 to 11.5.10.2 with database 9.2.0.8.

Last year I did something similar at Canon, upgrading from 11.5.9 to 11.5.10.2, but this time the upgrade covered both tiers (applications and database).

Research on SGA_MAX_SIZE

July 12, 2007

The sga_max_size parameter specifies the maximum overall size of the SGA for the lifetime of the instance. You can dynamically alter the size of the buffer cache, shared pool, large pool, streams pool, or Java pool, but only to the extent that the sum of these memory areas, plus the size of the other components (fixed SGA, variable SGA, log buffer, the keep and recycle buffer caches if configured, and any non-standard block size buffer caches if configured), does not exceed the value specified by SGA_MAX_SIZE.

The default for sga_max_size is the total size of the SGA as configured at instance startup. If sga_max_size is set to a value smaller than the amount of memory initially allocated at instance startup, it defaults to the total amount of memory initially allocated. This parameter cannot be changed dynamically, so make sure it is set correctly if you ever expect to increase overall SGA memory use. Note that on most operating systems Oracle reserves an amount of memory equal to sga_max_size, so use caution when setting this parameter to avoid swapping or paging.
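Since it is static, raising it means editing the spfile and bouncing the instance. Something like this (assuming the instance uses an spfile; the value is just an example):

SQL> alter system set sga_max_size = 2048M scope=spfile;
SQL> shutdown immediate
SQL> startup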

The change in the amount of physical memory consumed when SGA_TARGET is modified depends on the operating system. On some UNIX platforms that do not support dynamic shared memory, the physical memory in use by the SGA is equal to the value of the SGA_MAX_SIZE parameter. On such platforms, there is no real benefit in setting SGA_TARGET to a value smaller than SGA_MAX_SIZE. Therefore, setting SGA_MAX_SIZE on those platforms is not recommended.

On other platforms, such as Solaris and Windows, the physical memory consumed by the SGA is equal to the value of SGA_TARGET.
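To check how much headroom is left before resizing anything, a quick sketch like this works on 9.2 and later (the 800M below is only an example):

SQL> show parameter sga_max_size
SQL> select component, current_size/1024/1024 mb
  2  from v$sga_dynamic_components
  3  /
SQL> alter system set db_cache_size = 800M;

The last statement will simply fail if the new total would exceed sga_max_size.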

If you do not need it, do not install it

July 11, 2007

After upgrading the database from 9.0.1.4 to 9.2.0.8, which is used for E-Business Suite 11.5.10, I chose to install the HTTP Server, even though it is not actually required.

Then the problem came…

"iostat -xnp 5 100" showed an average disk service time of around 400 ms. What was the cause? Software RAID? A bug? A hardware fault?

After one week I noticed that an httpd process was running as the database owner, and when I checked its error log, the puzzle was solved.

[Wed Jul 11 15:29:33 2007] [notice] child pid 23876 exit signal Segmentation Fault (11), possible coredump in /11i/app/oracle/product/9.2.0/Apache/Apache
[Wed Jul 11 15:29:33 2007] [notice] child pid 23879 exit signal Segmentation Fault (11), possible coredump in /11i/app/oracle/product/9.2.0/Apache/Apache
[Wed Jul 11 15:29:35 2007] [notice] child pid 23880 exit signal Segmentation Fault (11), possible coredump in /11i/app/oracle/product/9.2.0/Apache/Apache
[Wed Jul 11 15:29:36 2007] [notice] child pid 23883 exit signal Segmentation Fault (11), possible coredump in /11i/app/oracle/product/9.2.0/Apache/Apache
[Wed Jul 11 15:29:36 2007] [notice] child pid 23882 exit signal Segmentation Fault (11), possible coredump in /11i/app/oracle/product/9.2.0/Apache/Apache
[Wed Jul 11 15:29:37 2007] [notice] child pid 23885 exit signal Segmentation Fault (11), possible coredump in /11i/app/oracle/product/9.2.0/Apache/Apache
[Wed Jul 11 15:29:37 2007] [notice] child pid 23886 exit signal Segmentation Fault (11), possible coredump in /11i/app/oracle/product/9.2.0/Apache/Apache
[Wed Jul 11 15:29:37 2007] [notice] child pid 23887 exit signal Segmentation Fault (11), possible coredump in /11i/app/oracle/product/9.2.0/Apache/Apache
[Wed Jul 11 15:29:37 2007] [notice] child pid 23884 exit signal Segmentation Fault (11), possible coredump in /11i/app/oracle/product/9.2.0/Apache/Apache
[Wed Jul 11 15:29:40 2007] [notice] child pid 23889 exit signal Segmentation Fault (11), possible coredump in /11i/app/oracle/product/9.2.0/Apache/Apache

A core dump was being generated by the Apache processes every two seconds. After stopping the Apache JServ, the problem was solved.
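For reference, the cleanup was basically just this; the apachectl path is assumed from the error log above, so adjust it for your own ORACLE_HOME:

$ /11i/app/oracle/product/9.2.0/Apache/Apache/bin/apachectl stop
$ iostat -xnp 5 10

After stopping it, the asvc_t column in the iostat output should come back down to normal.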

Two things can be learned here:
a) Every issue has a root cause.
b) If a feature is not going to be used in any case, do not install it.

I also searched for this message on Metalink; a few bugs and misconfigurations are known to cause it.

Segment Type ‘SPACE HEADER’

July 10, 2007

Today I noticed that, after converting tablespaces from OFA to OATM, the following segment type appears in the APPLSYSD tablespace. After some research, it turned out it had been there for a long time already.


SQL> col owner for a10
SQL> col segment_name for a10
SQL> col segment_type for a20
SQL> select owner, segment_name, segment_type from dba_segments
  2  where tablespace_name = 'APPLSYSD'
  3  /

OWNER      SEGMENT_NA SEGMENT_TYPE
---------- ---------- --------------------
SYS        8.42       SPACE HEADER

SQL>

Cause
One of the circumstances under which a 'SPACE HEADER' segment gets created is when a dictionary-managed tablespace is migrated to locally managed (see dbms_space_admin.tablespace_migrate_to_local()). The space header segment contains the extent bitmap and is allocated during the migration of the tablespace. Since there is no reserved space after the file header (as there is in tablespaces created locally managed), the bitmap segment is allocated somewhere in the "data" area of the datafile. During its creation the segment picks up some of the storage attributes (e.g. MAXEXTENTS) from the default storage clause of the tablespace. Once the segment has been created, it can neither be dropped nor changed.

Fix
You can ignore these "left-over" objects; it is safe to go ahead and drop the old tablespaces.
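If you want to check whether other migrated tablespaces carry the same left-over segments, a query along these lines will list them all:

SQL> select tablespace_name, owner, segment_name
  2  from dba_segments
  3  where segment_type = 'SPACE HEADER'
  4  /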

More information: Metalink Note 271866.1.

Strange things in Solaris 10 (bug?)

July 9, 2007

Today I ran into a strange issue that caused "rm -rf" to stop working.

$ uname -a
SunOS cashaps2 5.10 Generic_118833-24 sun4u sparc SUNW,Sun-Fire-V440

$ pwd
/11i/appl/fnd/11.5.0/lib
$ rm -rf temp
$ ls -ld temp
drwxrwxrwx 2 apptest1 dba 512 Jul 9 08:52 temp
$ /usr/xpg4/bin/rm -rf temp
rm: cannot determine if this is an ancestor of the current working directory temp

The fix for this issue is just as unreasonable.

Before fix:

root@cashaps2 # ls -l /11i
total 1042
drwxr-x--- 2 root root 512 Jun 29 12:08 11i


Apply the fix :

root@cashaps2 # umount /11i
root@cashaps2 # chmod 777 11i
root@cashaps2 # ls -l /11i
drwxrwxrwx 2 root root 512 Jun 29 12:08 11i

After changing the mount point's permissions to 777, the problem disappeared.
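For the record, the whole sequence is roughly this (assuming /11i has an entry in /etc/vfstab so it can simply be remounted afterwards):

root@cashaps2 # umount /11i
root@cashaps2 # chmod 777 /11i
root@cashaps2 # mount /11i
root@cashaps2 # ls -ld /11i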

How to identify whether your Solaris is 32-bit or 64-bit

May 18, 2007

Run the command

isainfo -v

If the system is running in 32-bit mode, you will see the following output:

32-bit sparc applications

On a 64-bit Solaris system, you'll see:

oracle@ids01 $ isainfo -v
64-bit sparcv9 applications
32-bit sparc applications
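Two other invocations that come in handy (both documented in the isainfo man page quoted below; output will of course vary by machine):

$ isainfo -b     # prints the number of bits in the native address space, e.g. 64
$ isainfo -kv    # reports whether the kernel itself is running 32-bit or 64-bit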


NAME
    isainfo - describe instruction set architectures

SYNOPSIS
    isainfo [ -v ] [ -b | -n | -k ]

DESCRIPTION
    The isainfo utility is used to identify various attributes of the
    instruction set architectures supported on the currently running
    system. Among the questions it can answer are whether 64-bit
    applications are supported, or whether the running kernel uses
    32-bit or 64-bit device drivers.

    When invoked with no flags, isainfo prints the name(s) of the
    native instruction sets for applications supported by the current
    version of the operating system. These will be a subset of the
    list returned by isalist(1). The subset corresponds to the basic
    applications environments supported by the currently running
    system.

SATA versus FC disks

May 15, 2007
Without any doubt, the technical characteristics and performance of FC disks remain for now superior to those of SATA disks. However, not all storage applications require the superior features of Fibre Channel. When used for the appropriate enterprise applications, SATA disks offer a tremendous cost advantage over FC. First, SATA drives are cheaper to manufacture, and because of their larger individual capacity, SATA drives are on average sixty percent cheaper per gigabyte than FC disks. The fact is that in large capacity systems, the drives themselves account for the vast majority of the cost of the system. Using SATA disks will substantially reduce the TCO of the storage system.

Storage data can reside at three different locations within the network storage hierarchy. This is also known as tiered storage; the three tiers are described below.

Particular data types are suitable for storage at the various levels.

1) Online (primary) storage

Best suited for business-critical applications that require constant, instantaneous access to data, such as databases and frequently accessed user data. This data requires continuous availability and typically has high performance requirements. Business-critical data will be stored on Fibre Channel disk implemented in enterprise-class storage solutions.

2) Near-Line (secondary) storage

Used for business-important applications that require quicker access compared with offline storage (such as tape), but that do not require the continuous, instantaneous access provided by online storage. Secondary storage represents a large percentage of a company's data and is an ideal fit for SATA technology.

3) Offline (archival) storage

Used for applications where infrequent serial access is required, such as backup for long-term storage. For this type of storage, tape remains the most economical solution.

First Ever Live Virtual Virtualization show from IBM and VMware – FREE

May 14, 2007
Need to reduce complexity?

Cut costs?

Improve your backup and retention strategy?

Cut your PC management costs?

Want to SEE Virtualization in action?

Do you know the difference between Grid and Virtualization?

Want to explore more on Storage Virtualization?

Here is the free event for you.
http://events.unisfair.com/index.jsp?eid=183&seid=10

Chipkill – ECC memory technology from IBM

May 8, 2007
In computer memory systems, Chipkill is IBM’s trademark for a form of advanced Error Checking and Correcting (ECC) computer memory technology that protects computer memory systems from any single memory chip failure as well as multi-bit errors from any portion of a single memory chip. It performs this function by scattering the bits of an ECC word across multiple memory chips, such that the failure of any one memory chip will affect only one ECC bit. This allows memory contents to be reconstructed despite the complete failure of one chip. The equivalent system from HP is called Chipspare.

Chipkill is frequently combined with dynamic bit-steering, so that if a chip fails (or has exceeded a threshold of bit errors), another, spare, memory chip is used to replace the failed chip. The concept is similar to that of RAID, which protects against disk failure, except that now the concept is applied to individual memory chips. The technology was developed by the IBM Corporation in the early and middle 1990s. An important RAS feature, Chipkill technology is deployed primarily on SSDs, mainframes and midrange Unix or Linux servers.

— From Wikipedia, the free encyclopedia

For more information, please read this white paper on the benefits of Chipkill-Correct ECC for PC Server main memory: http://www.ibm.com/servers/eserver/pseries/campaigns/chipkill.pdf