1

Topic: Clob to blob

Hello everyone!
Can anyone advise on the best way to convert a column from CLOB to BLOB?
The table is large: 140 million records, around 2.
I added a BLOB column and started converting in batches of 10,000:

DECLARE
  JOB_PREFIX     VARCHAR2(50) := 'CLOB_TO_BLOB_';
  v_rows_count   int;
  v_part_count   int;
  v_commit_count int := 10000;
BEGIN
  select count(*) into v_rows_count from big_table;
  v_part_count := greatest(round(v_rows_count / v_commit_count), 1);
  for i in 0 .. v_part_count loop
    DBMS_SCHEDULER.CREATE_JOB(
      JOB_NAME   => JOB_PREFIX || i,
      JOB_TYPE   => 'PLSQL_BLOCK',
      JOB_ACTION => 'begin
                       update big_table t set
                         t.col_blob = clob_to_blob(t.col_clob)
                       where MOD(ORA_HASH(t.rowid), ' || v_part_count || ') = ' || i || ';
                       commit;
                     end;',
      START_DATE => SYSDATE,
      ENABLED    => TRUE);
  end loop;
END;
/

But the jobs fail with ORA-01555, snapshot too old: rollback segment number 37 with name "_SYSSMU37_1974413906$" too small.
Is there a simpler way?
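
For reference: the clob_to_blob function used above isn't shown in the thread. A typical implementation (assumed here; it uses the database character set via the DBMS_LOB defaults) is a thin wrapper over DBMS_LOB.CONVERTTOBLOB, along these lines:

create or replace function clob_to_blob (p_clob in clob) return blob is
  -- sketch only: the actual function from this thread is not shown
  l_blob     blob;
  l_dest_off integer := 1;
  l_src_off  integer := 1;
  l_lang_ctx integer := dbms_lob.default_lang_ctx;
  l_warning  integer;
begin
  dbms_lob.createtemporary (l_blob, true);
  dbms_lob.converttoblob (dest_lob     => l_blob,
                          src_clob     => p_clob,
                          amount       => dbms_lob.lobmaxsize,
                          dest_offset  => l_dest_off,
                          src_offset   => l_src_off,
                          blob_csid    => dbms_lob.default_csid, -- assumes the DB charset
                          lang_context => l_lang_ctx,
                          warning      => l_warning);
  return l_blob;
end;
/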

2

Re: Clob to blob

Pir;
I would suggest creating a second table, big_table_2, without indexes.
big_table has indexes on it; do a bulk INSERT ... SELECT into the index-free big_table_2 so you don't run into rollback.
Then load big_table back from big_table_2 with another bulk INSERT ... SELECT.
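
Something along these lines (a sketch only: the id/col_clob/col_blob column list is illustrative, clob_to_blob is the function from the first post, and NOLOGGING plus the APPEND hint are assumptions for keeping undo and redo down):

-- staging table, no indexes; column list is illustrative
create table big_table_2 (
  id       number,
  col_clob clob,
  col_blob blob
) nologging;

-- direct-path load: converts in one pass and generates almost no undo
-- for the table data
insert /*+ append */ into big_table_2 (id, col_clob, col_blob)
select t.id, t.col_clob, clob_to_blob (t.col_clob)
  from big_table t;
commit;

-- then empty big_table and load it back the same way:
-- insert /*+ append */ into big_table (...) select ... from big_table_2;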

3

Re: Clob to blob

Pir;
Your table is large, and on top of that you're hammering it with dynamic SQL.
1) The simplest option: split the transactions along partition lines. In the UPDATE, name each partition explicitly; you can get the list of partitions from the data dictionary (see the sketch after this list).
2) If the partitions are too large, split the transactions further into groups of extents. DBA_EXTENTS plus DBMS_ROWID will help you there.
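
For option 1, something like this (a sketch: it presupposes that big_table really is partitioned, loops over USER_TAB_PARTITIONS, reuses clob_to_blob from the first post, and the IS NULL filter is an addition so that a rerun skips rows already converted):

begin
  for p in (select partition_name
              from user_tab_partitions
             where table_name = 'BIG_TABLE'
             order by partition_position) loop
    -- one transaction per partition; a partition name cannot be bound,
    -- hence the dynamic statement
    execute immediate
      'update big_table partition (' || p.partition_name || ') t
          set t.col_blob = clob_to_blob (t.col_clob)
        where t.col_blob is null';
    commit;
  end loop;
end;
/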

4

Re: Clob to blob

Wandering Ajtist;
There are nuances. The table has a foreign key that cannot be dropped, since it is tied to the parent table through referential integrity.
And in general it's a production table holding a five-year archive of documents. I'm not ready to lose it)

5

Re: Clob to blob

Pir wrote:

where MOD(ORA_HASH(t.rowid), ' || v_part_count || ') = ' || i || ';

Given the declared logic and the values of your variables, this condition is incorrect: because of ORA_HASH collisions, more than one row can land on a single hash value (see here), so a bucket can end up far bigger than 10,000 rows, and that is why it falls over with snapshot too old.
It's better to work within a single partition (a partition condition in the WHERE clause), and since you're batching by row count, ROWNUM is the tool for that.
To speed things up, use a FORALL update.
Example:

declare
  type t_cu_rou is table of rowid index by pls_integer;
  col_cu_rou t_cu_rou;
  rowallc    pls_integer;
begin
  select count(*)
    into rowallc
    from big_table c
   where <a partition condition>;
  for j in 1 .. (rowallc / 100000 + 1) loop
    select rowid bulk collect
      into col_cu_rou
      from big_table c
     where <a partition condition>
       and rownum < 100000;
    forall k_idx in col_cu_rou.first .. col_cu_rou.last
      update big_table t
         set t.col_blob = clob_to_blob(t.col_clob)
       where rowid = col_cu_rou(k_idx);
  end loop;
end;

6

Re: Clob to blob

Fogel;
What is "a partition condition" here, and how do you keep the loop from updating the same (100000 - 1) records every pass?