Take a look at MOS Note 1356604.1
CAUSE
There are two possible causes for this:
1. Documented and designed behaviour due to explicitly forcing an archive log creation before the redo log file is full, for example:
SQL> alter system switch logfile;
SQL> alter system archive log current;
RMAN> backup archivelog all;
RMAN> backup database plus archivelog;
ARCHIVE_LAG_TARGET: limits the amount of data that can be lost and effectively increases the availability of the standby database by forcing a log switch after the specified amount of time elapses. You can see this as well in RAC with an idle/low-load instance (see the example below).
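As an illustration only (the 1800-second value below is just an example, not a recommendation), the parameter can be checked and changed like this; setting it back to 0 disables the time-based forced log switch:
SQL> show parameter archive_lag_target
SQL> alter system set archive_lag_target = 1800 scope=both;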
2. Undocumented, but designed behaviour:
BUG 9272059 - REDO LOG SWITCH AT 1/8 OF SIZE DUE TO CMT CPU'S
BUG 10354739 - REDOLOGSIZE NOT COMPLETLY USED
BUG 12317474 - FREQUENT REDO LOG SWITCHES GENERATING SMALL SIZED ARCHIVELOGS
BUG 5450861 - ARCHIVE LOGS ARE GENERATED WITH A SMALLER SIZE THAN THE REDO LOG FILES
BUG 7016254 - DECREASE CONTROL FILE ENQUEUE WAIT AT LOG SWITCH
Explanation:
As per Bug: 5450861 (closed as 'Not a Bug'):
* The archive logs do not have to be equal in size. This was decided a very long time ago,
when blank padding of the archive logs was stopped, for a very good reason - to save disk space.
* The log switch does not occur when a redo log file is 100% full. There is an internal algorithm
that determines the log switch moment. This also has a very good reason - doing the log switch
at the last moment could incur performance problems (for various reasons, out of the scope of this note).
As a result, after the log switch occurs, the archivers are copying only the actual information from the
redo log files. Since the redo logs are not 100% full after the log switch and the archive logs are
not blank padded after the copy operation has finished, this results in uneven, smaller files than
the original redo log files.
There are a number of factors which combine to determine the log
switch frequency. These are the most relevant factors in this case:
a) RDBMS parameter LOG_BUFFER
If this is not explicitly set by the DBA then a default is used:
at instance startup the RDBMS calculates the number of shared redo
strands as ncpus/16, and the size of each strand as 128KB * ncpus
(where ncpus is the number of CPUs in the system). The log buffer
size is the number of strands multiplied by the strand size.
The calculated or specified size is rounded up to a multiple of the granule size
of a memory segment in the SGA. For 11.2 if
SGA size >= 128GB then granule size is 512MB
64GB <= SGA size < 128GB then granule size is 256MB
32GB <= SGA size < 64GB then granule size is 128MB
16GB <= SGA size < 32GB then granule size is 64MB
8GB <= SGA size < 16GB then granule size is 32MB
1GB <= SGA size < 8GB then granule size is 16MB
SGA size < 1GB then granule size is 4MB
There are some minimums and maximums enforced.
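To relate this to a specific system, the relevant values can be read from the instance; the calculation below is only a rough sketch of the ncpus/16 rule and ignores the minimums/maximums mentioned above:
SQL> show parameter cpu_count
SQL> show parameter log_buffer
SQL> select ceil(value/16) as approx_shared_strands from v$parameter where name = 'cpu_count';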
b) System load
Initially only one redo strand is used, ie the number of "active"
redo strands is 1, and all the processes copy their redo into
that one strand. When/if there is contention for that strand then
the number of active redo strands is raised to 2. As contention
for the active strands increases, the number of active strands
increases. The maximum possible number of active redo strands is
the number of strands initially allocated in the log buffer.
(This feature is called "dynamic strands", and there is a hidden
parameter to disable it which then allows processes to use all
the strands from the outset).
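A rough indicator of such contention (this gives a feel for redo allocation pressure, not a definitive count of active strands) is the redo allocation latch statistics:
SQL> select gets, misses, immediate_gets, immediate_misses from v$latch where name = 'redo allocation';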
c) Log file size
This is the logfile size decided by the DBA when the logfiles are created.
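For reference, the configured online redo log sizes can be listed with:
SQL> select group#, thread#, bytes/1024/1024 as size_mb, status from v$log;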
d) The logfile space reservation algorithm
When the RDBMS switches into a new online redo logfile, all the
log buffer redo strand memory is "mapped" to the logfile space.
If the logfile is larger than the log buffer then each strand
will map/reserve its strand size worth of logfile space, and the
remaining logfile space (the "log residue") is still available.
If the logfile is smaller than the log buffer, then the whole
logfile space is divided/mapped/reserved equally among all the
strands, and there is no unreserved space (ie no log residue).
When any process fills a strand such that all the reserved
underlying logfile space for that strand is used, AND there is
no log residue, then a log switch is scheduled.
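To see how much redo was actually written before each switch (only the actual information is copied to the archive log, as described above), the archived log sizes can be compared with the online log size, for example:
SQL> select sequence#, round(blocks*block_size/1024/1024) as archived_mb from v$archived_log order by sequence#;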
Example: with 128 CPUs the RDBMS allocates a log buffer of size
128MB containing 8 shared strands of size 16MB (it may be a bit
larger than 128MB as it rounds up to an SGA granule boundary).
The logfiles are 100MB, so when the RDBMS switches into a new
online redo logfile each strand reserves 100MB/8 = 12.5MB, ie
25600 x 512-byte redo blocks, and there is no log residue. If there
is low system load, only one of the redo strands will be active/used,
and when the 25600 blocks of that strand are filled a log switch
will be scheduled - the created archive logs have a size of around
25600 blocks (about 12.5MB).
With everything else staying the same (128 CPUs and low load),
using a larger logfile would not really reduce the amount of
unfilled space when the log switches are requested, but it would
make that unfilled space less significant as a percentage of the
total logfile space, eg
- with a 100MB logfile, the log switch happens with 7 x 12.5MB of
logfile space unfilled (ie the logfile is only about 12.5% full
when the log switch is requested)
- with a 1GB logfile, the log switch would happen with 7 x 16MB of
logfile space unfilled (ie the logfile is roughly 90% full when
the log switch is requested)
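The resulting log switch frequency can be checked from the controlfile history, for example per day:
SQL> select trunc(first_time) as day, count(*) as switches from v$log_history group by trunc(first_time) order by 1;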
With a high CPU_COUNT, a low load and a redo log file size smaller than
the redo log buffer, you may see small archived log files because of log
switches at about 1/8 of the defined log file size.
This is because CPU_COUNT determines the number of redo strands (ncpus/16).
With a low load, only a single strand may be used. When the redo log file
is smaller than the redo log buffer, the log file space is divided over the
available strands, so if only a single strand is active, a log switch can
already occur once that one strand is filled.
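If the small archive logs are a concern, one option consistent with the explanation above is to create online redo log groups that are at least as large as the log buffer, so that there is always log residue when switching in. The group number, path and 2G size below are placeholders to adapt to your environment, not a recommendation:
SQL> alter database add logfile group 4 ('/u01/oradata/ORCL/redo04.log') size 2g;
The old, smaller groups can then be dropped with ALTER DATABASE DROP LOGFILE GROUP n once they are INACTIVE and archived.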