Fastest way to populate a full-text search index - sql-server

I have a large table with millions of records, and it has one large string column that can contain 100k+ characters. I'm applying a full-text search index to it.
Full-text search queries work great, but I have a problem with indexing. We generate quite a lot of content and place it in this table, and we have found ourselves in a situation where we generate content faster than SQL Server actually indexes it. Over time, more and more rows are left unindexed.
I know there are several ways to configure the indexing process: automatic (based on change tracking) and incremental population.
So, the question is: how would I perform indexing in the most efficient way?
Is there any reason to believe that incremental population is actually faster than automatic population with change tracking?
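For reference, here is a minimal sketch of how the two population modes are set up, assuming a hypothetical table dbo.Documents with a primary key index PK_Documents and a large text column Body (all names invented):

CREATE FULLTEXT CATALOG DocsCatalog;

-- Automatic population: SQL Server tracks changes and indexes them in the background.
CREATE FULLTEXT INDEX ON dbo.Documents (Body)
    KEY INDEX PK_Documents
    ON DocsCatalog
    WITH CHANGE_TRACKING AUTO;

-- Alternative: switch to manual change tracking and run incremental
-- populations on your own schedule (incremental population requires
-- a timestamp column on the table):
ALTER FULLTEXT INDEX ON dbo.Documents SET CHANGE_TRACKING MANUAL;
ALTER FULLTEXT INDEX ON dbo.Documents START INCREMENTAL POPULATION;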

Related

How do I find which indexes are being used and what query is using the index?

I am using SQL Server 2008. I have tables on which there are duplicate indexes (basically indexes with the same definition). I wanted to know if it's possible to find out which queries are using these indexes. I don't know why the duplicate indexes were created in the first place, so before removing them I want to identify any queries that are using them.
One more question: in the above cases, how does the SQL Server engine determine which index to use? What's the performance impact of this?
Thanks
If you have indexes in your database that are exact duplicates, delete them, period. No harm can come from removing the duplicates, but harm CAN come from the duplicates existing.
The fact that SQL Server even allows duplicate indexes to be created in the first place is ridiculous.
Here is an article on how to find unused (and missing) indexes:
http://weblogs.sqlteam.com/mladenp/archive/2009/04/08/SQL-Server---Find-missing-and-unused-indexes.aspx
If the indexes are truly duplicate, then it shouldn't matter what queries are using them. If you remove one, the query should use the other (unless there is a query hint that specifies an index name, which is rare).
Just make sure the indexes are truly duplicates:
Indexed field order matters. An index on (fname, lname) is not the same as an index on (lname, fname)
An index on (lname) is redundant if you already have one on (lname, fname, ...other fields). (Going from the left only; order matters here, too.)
Check included fields. Similar looking indexes might have different included fields to cover different queries (although you could probably still consolidate these by making one index with all included fields)
Other properties of the index might cause one index to behave slightly differently than the other (clustered? unique? fill factor? max degree of parallelism? filegroup? auto-recompute stats?). (You still probably wouldn't need 2 different indexes, but it's worth understanding the differences anyway.)
With the DMV (dynamic management views), you can definitely find out which indices aren't being used - those that haven't been used in a long time could be dropped.
Check out:
Find Indexes not in use
General intro to Dynamic Management Views
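As a starting point, here is a minimal sketch against the sys.dm_db_index_usage_stats DMV that lists indexes with no recorded reads in the current database. Keep in mind that the usage counters reset when the server restarts, so treat the output as indicative only:

SELECT o.name AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes i
JOIN sys.objects o
  ON o.object_id = i.object_id
LEFT JOIN sys.dm_db_index_usage_stats s
  ON s.object_id = i.object_id
 AND s.index_id = i.index_id
 AND s.database_id = DB_ID()
WHERE o.is_ms_shipped = 0
  AND i.index_id > 0          -- skip heaps
  AND (s.index_id IS NULL     -- no recorded use since the last restart
       OR s.user_seeks + s.user_scans + s.user_lookups = 0)
ORDER BY o.name, i.name;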
One more question: in the above cases, how does the SQL Server engine determine which index to use?
That's a pretty complicated process - the SQL Server query optimizer will use statistics and other methods to figure out which indices would be helpful for a given query. Which one it'll pick when you have two identical ones is a tricky question... I don't know, quite honestly.
What's the performance impact of this?
Of having duplicate identical indices? Those need to be maintained on any INSERT, UPDATE, and DELETE operation that touches on the columns in the index - so you're definitely paying a performance penalty for this.

Table design and performance related to database?

I have a table with 158 columns in SQL Server 2005.
Are there any disadvantages to keeping so many columns?
Also, I have to keep those columns, so how can I improve performance - e.g. by using SPs or indexes?
Wide tables can be quite performant when you usually want all the fields for a particular row. Have you traced your users' usage patterns? If they're usually pulling just one or two fields from multiple rows then your performance will suffer. The main issue is when your total row size hits the 8k page mark. That means SQL has to hit the disk twice for every row (first page + overflow page), and that's not counting any index hits.
The guys at Universal Data Models will have some good ideas for refactoring your table. And Red Gate's SQL Refactor makes splitting a table heaps easier.
There is nothing inherently wrong with wide tables. The main case for normalization is database size, where lots of null columns take up a lot of space.
The more columns you have, the slower your queries will be.
That's just a fact. That isn't to say you aren't justified in having many columns. The above does not give one carte blanche to split one entity's worth of table with many columns into multiple tables with fewer columns. The administrative overhead of such a solution would most probably outweigh any perceived performance gains.
My number one recommendation to you, based off of my experience with abnormally wide tables (denormalized schemas of bulk imported data) is to keep the columns as thin as possible. I had to work with a lot of crazy data and left most of the columns as VARCHAR(255). I recommend against this. Although convenient for development purposes, performance would spiral out of control, especially when working with Perl. Shrinking the columns to their bare minimum (VARCHAR(18) for instance) helped immensely.
Stored procedures are just batches of SQL commands; they don't have any direct effect on speed, other than that regular use of certain types of stored procedures will end up using cached query plans (which is a performance boost).
You can use indexes to speed up certain queries, but there's no hard and fast rule here. Good index design depends entirely on the type of queries you're running. Indexing will, by definition, make your writes slower; they exist only to make your reads faster.
The problem with having that many columns in a table is that finding rows using the clustered primary key can be expensive. If it were possible to change the schema, breaking this up into many normalized tables will be the best way to improve efficiency. I would strongly recommend this course.
If not, then you may be able to use indices to make some SELECT queries faster. If you have queries that only use a small number of the columns, adding indices on those columns could mean that the clustered index will not need to be scanned. Of course, there is always a price to pay with indices, in terms of storage space and INSERT, UPDATE and DELETE time, so this may not be a good idea for you.
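To illustrate the covering idea with a hedged sketch (table and column names are invented), a narrow nonclustered index can let a frequent query avoid touching the 158-column rows entirely:

CREATE NONCLUSTERED INDEX IX_WideOrders_Status
    ON dbo.WideOrders (StatusCode, CreatedDate)
    INCLUDE (CustomerId, TotalAmount);

-- Answered entirely from the index; the wide clustered index is never scanned:
SELECT CustomerId, TotalAmount
FROM dbo.WideOrders
WHERE StatusCode = 'OPEN'
  AND CreatedDate >= '20090101';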

Is a globally partitioned index better (faster) than a non-partitioned index?

I'm interested to find out if there is a performance benefit to partitioning a numeric column that is often the target of a query. Currently I have a materialized view that contains ~50 million records. When using a regular b-tree index and searching by this numeric column I get a cost of 7 and query results in about 0.8 seconds (with non-primed cache). After adding a global hash partition (with 64 partitions) for that column I get a cost of 6 and query results in about 0.2 seconds (again with non-primed cache).
My first reaction is that the partitioned index has improved the performance of my query. However, I realize that this may just be a coincidence and could be totally dependent on the values being searched on, or on other factors I'm not aware of. So my question is: is there a performance benefit to adding a global hash partition to a numeric column on a large table, or is the cost of determining which index partitions to scan outweighed by the cost of just doing a full range scan on a non-partitioned index?
I'm sure this, like many Oracle questions, can be answered with an "it depends." :) I'm interested in learning what factors I should consider to determine the benefits of each approach.
Thanks!
I'm pretty sure you have found this reference in your research - Partitioned Tables and Indexes. Still, I'll give the link in case somebody else is interested; it is very good material on partitioning.
Straight to the point - a partitioned index just decomposes the index into pieces (64 in your situation) and spreads the data depending on the hashed partitioning key. When you want to use it, Oracle "calculates" the hash of the key and determines which piece to continue searching in.
Knowing how index searching works, I think that on really huge data it is better to choose the partitioned index, in order to shrink the index tree you traverse (compared to a regular index). It really depends on the data in the table (how the regular index tree is composed) and on whether hashing plus a direct jump to a lower node is faster than a regular tree traversal from the root node.
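For concreteness, a minimal Oracle sketch of the two alternatives compared in the question (object names are hypothetical):

-- Plain b-tree index on the numeric column:
CREATE INDEX ix_mv_num ON my_mview (num_col);

-- Global hash-partitioned index, as described in the question (64 partitions):
CREATE INDEX ix_mv_num_hash ON my_mview (num_col)
    GLOBAL PARTITION BY HASH (num_col)
    PARTITIONS 64;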
Finally, trust your test results. If one technique gives better results on your exact data than another, don't be afraid to implement it.

Database indexes: Only selects!

Good day,
I have about 4GB of data, separated into about 10 different tables. Each table has a lot of columns, and each column can be a search criterion in a query. I'm not a DBA at all, and I don't know much about indexes, but I want to speed up the search as much as possible. The important point is, there won't be any updates, inserts or deletes at any moment (the tables are populated once every 4 months). Is it appropriate to create an index on each and every column? Remember: no inserts, updates or deletes, only selects!
Also, if I make all of these columns integer instead of varchar, would it make a difference in speed?
Thank you very much!
Answer: No. Indexing every column separately is not good design. Indexes need to comprise multiple columns in many cases, and there are different types of indexes for different requirements.
The tuning wizard mentioned in other answers is a good first cut (esp. for a learner).
Don't try to guess your way through it, or hope you understand complex analyses - get advice specific to your situation. We seem to have several threads going here that are quite active for specific situations and query optimization.
Have you looked at running the Index Tuning Wizard? It will give you index suggestions based on a workload.
Absolutely not.
You have to understand how indexes work. If you have a table of, say, 1000 records, and a column is a BIT that can hold only one of two values, an index on that column alone will be worthless, because it will not be selective enough. When you index a column, be very cognizant of the kinds of selects that will be run against the table: will that index be selective enough for the optimizer to use it effectively?
To that point, you may very well find that a few carefully selected composite indexes will vastly outperform the solution of many single indexes on each column. The golden rule: how the database is queried will determine how you should make your indexes.
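As a hypothetical illustration (names invented): one composite index can serve a common two-column filter that two separate single-column indexes would serve poorly:

CREATE INDEX IX_People_LastFirst ON dbo.People (lname, fname);

-- A single seek on the composite index answers this; with two single-column
-- indexes the optimizer would have to pick one and filter the rest:
SELECT * FROM dbo.People WHERE lname = 'Smith' AND fname = 'Anna';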
Two pieces of missing information: how many distinct values are in each column, and which DBMS you're using. If you're using Oracle and have less than a few thousand distinct values per column, you can create bitmap indexes. These are very space- and execution-efficient for exact matches.
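If Oracle applies, that option is a one-liner; a minimal sketch with invented names:

-- Bitmap indexes suit low-cardinality columns and read-mostly workloads,
-- which matches the "populated once every 4 months" pattern here:
CREATE BITMAP INDEX ix_facts_status ON facts (status_code);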
Otherwise, it's a tradeoff: each index will add roughly the same amount of space as a one-column table containing the same data, so you'll essentially double (probably 2.5x) your space requirements. So maybe 10GB, which isn't a whole lot of data.
Then there's the question of whether your DBMS will efficiently merge multiple index-based selects. It's quite possible that it won't, unless you do self-joins for every column that you're selecting against.
Best answer: try it on a smaller dataset (so that you're not spending all your time building the indexes) and see how it works.
If you are selecting a set of columns from the table greater than those covered by the columns in the selected indexes, then you will inevitably incur a bookmark lookup in the query plan, which is where the query processor has to retrieve the non-covered columns from the clustered index using the reference ID from leaf rows in the associated non-clustered index.
In my experience, bookmark lookups can really kill query performance, due to the volume of extra reads required and the fact that each row in the clustered index has to be resolved individually. This is why I try to make NC indexes covering wherever possible, which is easier on smaller tables where the required query plans are well-known, but if you have large tables with lots of columns with arbitrary queries expected then this probably won't be feasible.
This means you only get bang for your buck with an NC index of any kind if the index is covering, or if it selects a small enough data set that the cost of a bookmark lookup is mitigated - indeed, you may find that the query optimizer won't even look at your indexes if the cost is prohibitive compared to a clustered index scan, where all the columns are already available.
So there is no point in creating an index unless you know that index will optimize the result of a given query. The value of an index is therefore proportional to the percentage of queries that it can optimize for a given table, and this can only be determined by analyzing the queries that are being executed, which is exactly what the Index Tuning Wizard does for you.
So, in summary:
1) Don't index every column. This is classic premature optimization. You cannot optimize a large table with indexes for all possible query plans in advance.
2) Don't index any column, until you have captured and run a base workload through the Index Tuning Wizard. This workload needs to be representative of the usage patterns of your application, so that the wizard can determine what indexes would actually help the performance of your queries.

How to deal with billions of records in SQL Server?

Hi guys, I have a SQL Server 2008 database with 30,000,000,000 records in its one major table. Now we are looking at the performance of our queries. We have done all the indexing. I found that we can split our database tables into multiple partitions, so that the data will be spread over multiple files, which will increase query performance.
But unfortunately this functionality is only available in SQL Server Enterprise edition, which is unaffordable for us.
Could you guys suggest any other way to maintain query performance?
e.g. select * from mymajortable where date between '2000/10/10' and '2010/10/10'
This query takes around 15 min to retrieve around 10,000 records.
A SELECT * will obviously be less efficiently served than a query that uses a covering index.
First step: examine the query plan, and look for table scans and the steps taking the most effort (%).
If you don't already have an index on your 'date' column, you certainly need one (assuming sufficient selectivity). Try to reduce the columns in the select list, and if there are 'sufficiently' few, add them to the index as included columns (this can eliminate bookmark lookups into the clustered index and boost performance).
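A hedged sketch of that suggestion, with hypothetical column names (Col1, Col2) standing in for whatever the query actually selects:

CREATE NONCLUSTERED INDEX IX_MyMajorTable_Date
    ON dbo.MyMajorTable ([date])
    INCLUDE (Col1, Col2);   -- the few columns in the select list

-- Then select only those columns, not *. Unambiguous date literals
-- (yyyymmdd) avoid any dependence on DATEFORMAT settings:
SELECT [date], Col1, Col2
FROM dbo.MyMajorTable
WHERE [date] BETWEEN '20001010' AND '20101010';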
You could break your data up into separate tables (say by a date range) and combine via a view.
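For example, a minimal partitioned-view sketch (table and view names invented). Each underlying table carries a CHECK constraint on its date range, which lets the optimizer skip tables outside the queried range:

-- dbo.MyMajorTable_2000s has CHECK ([date] >= '20000101' AND [date] < '20100101')
-- dbo.MyMajorTable_2010s has CHECK ([date] >= '20100101' AND [date] < '20200101')
CREATE VIEW dbo.MyMajorTableAll
AS
SELECT * FROM dbo.MyMajorTable_2000s
UNION ALL
SELECT * FROM dbo.MyMajorTable_2010s;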
It is also very dependent on your hardware (# cores, RAM, I/O subsystem speed, network bandwidth)
Suggest you post your table and index definitions.
First, always avoid SELECT *, as that causes the select to fetch all columns; if there is an index with just the columns you need, you are fetching a lot of unnecessary data. Using only the exact columns you need to retrieve lets the server make better use of indexes.
Secondly, have a look at included columns for your indexes; that way, often-requested data can be included in the index to avoid having to fetch rows.
Third, you might try to use an int column for the date and convert the date into an int. Ints are usually more effective in range searches than dates, especially if you have time information too, and if you can skip the time information the index will be smaller.
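A hedged sketch of that idea (column names hypothetical): store dates as yyyymmdd integers and range-scan the ints instead:

ALTER TABLE dbo.MyMajorTable ADD DateKey int;

-- Backfill; on a table this size you would do this in batches, not one statement:
UPDATE dbo.MyMajorTable
SET DateKey = YEAR([date]) * 10000 + MONTH([date]) * 100 + DAY([date]);

CREATE INDEX IX_MyMajorTable_DateKey ON dbo.MyMajorTable (DateKey);

-- The original range query becomes a plain integer range:
SELECT DateKey, Col1
FROM dbo.MyMajorTable
WHERE DateKey BETWEEN 20001010 AND 20101010;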
One more thing to check is the execution plan the server uses; you can see this in Management Studio if you enable showing the execution plan in the menu. It can indicate where the problem lies: you can see which indexes it tries to use, and sometimes it will suggest new indexes to add.
It can also indicate other problems. A Table Scan or Index Scan is bad, as it indicates that the server has to scan through the whole table or index, while an Index Seek is good.
It is a good source to understand how the server works.
If you add an index on date, you will probably speed up your query due to an index seek + key lookup instead of a clustered index scan, but if your filter on date returns too many records the index will not help you at all, because the key lookup is executed for each result of the index seek; SQL Server will then switch to a clustered index scan.
To get the best performance you need to create a covering index, that is, include all the columns you need in the "included columns" part of your index - but that will not help you if you use select *.
Another issue with the select * approach is that you can't use the cache or the execution plans in an efficient way. If you really need all the columns, make sure you specify all the columns instead of the *.
You should also fully qualify the object name to make sure your plan is reusable.
You might consider creating an archive database and moving anything older than, say, 10-20 years into it. This should drastically speed up your primary production database while retaining all of your historical data for reporting needs.
What type of queries are we talking about?
Is this a production table? If yes, look into normalizing a bit more and see if you cannot go a bit further in normalizing the DB.
If this is for reports, including a lot of Ad Hoc report queries, this screams data warehouse.
I would create a DW with separate pre-processed reports which include all the calculations and aggregations you could expect.
I am a bit worried about a business model which involves dealing with BIG data but does not generate enough revenue or even attract enough venture investment to upgrade to enterprise.
