A B-tree index keeps keys in sorted order in a tree of pages. The database navigates from the root to a leaf by comparing values, touching only a handful of pages even for huge tables.
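The navigation described above can be sketched with a toy two-level tree: a root page of separator keys and a set of sorted leaf pages. The page layout and fanout here are illustrative, not how any particular engine lays out pages:

```python
import bisect

# Toy two-level B-tree: a root page of separator keys and sorted leaf
# pages of (key, value) entries. Real pages hold hundreds of entries.
leaves = [
    [(1, "a"), (3, "b"), (5, "c")],
    [(7, "d"), (9, "e"), (11, "f")],
    [(13, "g"), (15, "h")],
]
# root[i] is the smallest key stored on leaves[i + 1]
root = [7, 13]

def search(key):
    # Root -> leaf: one binary search per level, so the lookup touches
    # only two "pages" here no matter how many keys the leaves hold.
    leaf = leaves[bisect.bisect_right(root, key)]
    keys = [k for k, _ in leaf]
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return leaf[i][1]
    return None
```

With a larger fanout the same two-hop (or three-hop) navigation covers millions of keys, which is why the tree stays shallow even for huge tables.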
An index helps most when the filter matches a small fraction of rows, i.e. when the predicate is highly selective. If a predicate matches, say, 40% of the table, the engine may decide it is cheaper to scan the table sequentially than to bounce between many index entries and scattered table pages.
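A back-of-envelope cost comparison makes the 40% case concrete. The numbers below are illustrative assumptions, not taken from any real planner's cost model:

```python
# Illustrative cost sketch: compare page fetches for the two plans.
rows = 1_000_000
pages = 10_000            # heap pages, assuming ~100 rows per page
match_fraction = 0.40     # a low-selectivity predicate

# Index path: in the worst case, one table-page fetch per matching row
# (rows scattered across pages); the index traversal itself is ignored.
index_path_fetches = int(rows * match_fraction)

# Scan path: read every heap page exactly once, sequentially.
scan_path_fetches = pages
```

Here the index path costs 400,000 random reads against 10,000 sequential reads for the scan, so the planner's preference for a full scan is unsurprising.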
Often the index stores (key, row_pointer) entries. After finding matching keys in the index, the database does a per-row lookup to fetch the full row from the table. Those lookups can dominate the query's cost when many rows match.
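A minimal sketch of that two-step pattern, with a dict standing in for the heap table and a counter standing in for I/O cost (all names here are hypothetical):

```python
import bisect

# Toy heap table: row_pointer -> full row. The index stores only
# (key, row_pointer), so each match still costs a table fetch.
table = {10: ("alice", "US"), 20: ("bob", "UK"), 30: ("alice", "FR")}
index = sorted((name, rid) for rid, (name, _) in table.items())

fetches = 0
def fetch(rid):
    global fetches
    fetches += 1          # count table lookups as a proxy for I/O cost
    return table[rid]

# Range-scan the index for key == "alice", then look up each row.
lo = bisect.bisect_left(index, ("alice",))
rows = []
for key, rid in index[lo:]:
    if key != "alice":
        break
    rows.append(fetch(rid))
```

The fetch count grows with the number of matches, which is exactly why a fast index probe can still lead to an expensive query.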
If the query only needs columns that are already in the index (either as key columns or included columns), the database can answer from the index alone—an index-only plan.
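SQLite makes this visible in its query plans. The sketch below assumes SQLite's `EXPLAIN QUERY PLAN` output, whose exact wording varies a little between versions; the schema and index names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")
conn.execute("CREATE INDEX idx_email_name ON users(email, name)")

# The query needs only email and name, both present in the index,
# so SQLite can answer from the index without touching the table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM users WHERE email = ?",
    ("a@example.com",),
).fetchall()
detail = " ".join(row[-1] for row in plan)
```

The plan detail should report something like `SEARCH users USING COVERING INDEX idx_email_name (email=?)`, SQLite's name for an index-only plan.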
A common surprise: an index can find matches quickly, but fetching the full rows can still be the expensive part.
Why can a “covering” index make a query faster even when the filter column is already indexed?
The key improvement is avoiding many trips from the index into the table to fetch additional columns: if everything the query needs is in the index, the engine can stop at the index. It is easy to assume the win is about sorting or join choice, but those depend on the query shape and plan. And covering indexes usually store more data (extra columns), not less, so the benefit is fewer lookups, not a smaller index.
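The contrast can be demonstrated by planning the same query against a narrow index and a wider one. This is a sketch using SQLite's `EXPLAIN QUERY PLAN` output (whose wording varies slightly across versions), with hypothetical table and index names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")

def plan_for(index_sql):
    # Swap in an index and re-plan the same query.
    conn.execute("DROP INDEX IF EXISTS idx")
    conn.execute(index_sql)
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT name FROM users WHERE email = ?",
        ("a@example.com",),
    ).fetchall()
    return " ".join(r[-1] for r in rows)

# Narrow index: locates the rows, but name still needs table lookups.
narrow = plan_for("CREATE INDEX idx ON users(email)")
# Wider index: also stores name, so the table is never touched.
covering = plan_for("CREATE INDEX idx ON users(email, name)")
```

Both plans use an index on the filter column; only the second is reported as a covering (index-only) plan, which is the whole difference the question is asking about.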