Topic: How to defragment heaps in a database?
Greetings, colleagues!
I have a large but badly designed database.
It contains about ten heap tables — no clustered index and no primary key.
Some of these tables see fairly intensive insertion and deletion of data.
I read from Dmitry Pilugin that deleting from a heap is not so simple: ideally it should be done as `delete from table1 with (tablock)`, because otherwise the pages that held the deleted rows remain allocated to the table and are not reused afterwards (if I understood it all correctly; if not, please correct me).
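As far as I understand his article, the difference between the two patterns would look like this (`table1` is just a placeholder name for one of my heaps):

```sql
-- Plain delete: with only row/page locks taken, pages emptied by the
-- delete can remain allocated to the heap instead of being deallocated,
-- so the file keeps "lost" space.
DELETE FROM table1;

-- With the TABLOCK hint the whole table is locked for the duration of
-- the delete, and emptied pages can be deallocated as part of it.
DELETE FROM table1 WITH (TABLOCK);
```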
This raises a few questions:
1. How can I estimate this "lost space" in the database file?
2. What should be done to fix the situation? Is `ALTER TABLE table1 REBUILD` enough?
3. Most of these tables have the form GUID + one or several BLOB columns (nvarchar(max) and varbinary(max)). Will the pages occupied by the BLOBs be "lost" too?
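For question 1, I'm assuming something like this would show how much allocated space the heaps are actually using (a sketch using `sys.dm_db_index_physical_stats` in DETAILED mode, where `index_id = 0` means the heap itself):

```sql
-- Compare pages allocated to each heap with how full those pages are;
-- many pages with a low fill percentage would suggest "lost" space.
SELECT
    o.name                             AS table_name,
    ps.page_count,                     -- pages allocated to the heap
    ps.avg_page_space_used_in_percent, -- average fullness of those pages
    ps.record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ps
JOIN sys.objects AS o
    ON o.object_id = ps.object_id
WHERE ps.index_id = 0;                 -- heaps only
```

Is this the right way to look at it, or is there a better measure?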