Class Store

All Implemented Interfaces:
Closeable, AutoCloseable, org.elasticsearch.core.RefCounted, IndexShardComponent

public class Store extends AbstractIndexShardComponent implements Closeable, org.elasticsearch.core.RefCounted
A Store provides plain access to files written by an Elasticsearch index shard. Each shard has a dedicated store that it uses to access Lucene's Directory, the lowest level of file abstraction in Lucene used to read and write Lucene indices. This class also provides access to metadata information such as checksums for committed files. A committed file is a file that belongs to a segment written by a Lucene commit. Files that have not been committed, i.e. created during a merge or a shard refresh / NRT reopen, are not considered in the MetadataSnapshot.

Note: If you use a store, its reference count should be increased before use by calling #incRef, and a corresponding #decRef must be called in a try/finally block to release the store again, i.e.:

      store.incRef();
      try {
        // use the store...

      } finally {
          store.decRef();
      }
 
  • Field Details

    • FORCE_RAM_TERM_DICT

      public static final Setting<Boolean> FORCE_RAM_TERM_DICT
      This is an escape hatch for Lucene's internal optimization that checks whether the IndexInput is an instance of ByteBufferIndexInput and, if so, does not load the term dictionary into RAM but instead reads it off disk, provided the field is not an ID-like field. Since this optimization was added very late in the release process, this setting allows users to opt out of it by exploiting Lucene internals and wrapping the IndexInput in a simple delegate.
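
      A minimal sketch of reading this flag, assuming indexSettings refers to the IndexSettings of the index that owns this store:

          // hedged example: resolve the escape-hatch setting for this index
          boolean forceRamTermDict = Store.FORCE_RAM_TERM_DICT.get(indexSettings.getSettings());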
    • CORRUPTED_MARKER_NAME_PREFIX

      public static final String CORRUPTED_MARKER_NAME_PREFIX
      See Also:
      Constant Field Values
    • INDEX_STORE_STATS_REFRESH_INTERVAL_SETTING

      public static final Setting<org.elasticsearch.core.TimeValue> INDEX_STORE_STATS_REFRESH_INTERVAL_SETTING
    • READONCE_CHECKSUM

      public static final org.apache.lucene.store.IOContext READONCE_CHECKSUM
      Specific IOContext indicating that we will read only the Lucene file footer (containing the file checksum). See Store.MetadataSnapshot.checksumFromLuceneFile(org.apache.lucene.store.Directory, java.lang.String, java.util.Map<java.lang.String, org.elasticsearch.index.store.StoreFileMetadata>, org.apache.logging.log4j.Logger, java.lang.String, boolean).
  • Constructor Details

  • Method Details

    • directory

      public org.apache.lucene.store.Directory directory()
    • readLastCommittedSegmentsInfo

      public org.apache.lucene.index.SegmentInfos readLastCommittedSegmentsInfo() throws IOException
      Returns the last committed segments info for this store.
      Throws:
      IOException - if the index is corrupted or the segments file is not present
    • getMetadata

      public Store.MetadataSnapshot getMetadata(org.apache.lucene.index.IndexCommit commit) throws IOException
      Returns a new MetadataSnapshot for the given commit. If the given commit is null, the latest commit point is used. Note that this method requires the caller to verify that it has the right to access the store and that no concurrent file changes are happening. If in doubt, you probably want to use one of the following instead: readMetadataSnapshot(Path, ShardId, NodeEnvironment.ShardLocker, Logger) to read metadata while locking, IndexShard.snapshotStoreMetadata() to safely read from an existing shard, or IndexShard.acquireLastIndexCommit(boolean) to get an IndexCommit which is safe to use but has to be freed. A minimal usage sketch is shown after the Throws list below.
      Parameters:
      commit - the index commit to read the snapshot from or null if the latest snapshot should be read from the directory
      Throws:
      org.apache.lucene.index.CorruptIndexException - if the Lucene index is corrupted. This can be caused by a checksum mismatch or an unexpected exception while opening the index or reading the segments file.
      org.apache.lucene.index.IndexFormatTooOldException - if the Lucene index is too old to be opened.
      org.apache.lucene.index.IndexFormatTooNewException - if the Lucene index is too new to be opened.
      FileNotFoundException - if one or more files referenced by a commit are not present.
      NoSuchFileException - if one or more files referenced by a commit are not present.
      org.apache.lucene.index.IndexNotFoundException - if the commit point can't be found in this store
      IOException
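
      A minimal usage sketch, assuming the caller has already verified that no concurrent file changes are happening (store and logger are in scope):

          store.incRef();
          try {
              // passing null reads the snapshot from the latest commit point
              Store.MetadataSnapshot snapshot = store.getMetadata(null);
              for (StoreFileMetadata file : snapshot) {
                  logger.trace("committed file [{}] length [{}]", file.name(), file.length());
              }
          } finally {
              store.decRef();
          }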
    • getMetadata

      public Store.MetadataSnapshot getMetadata(org.apache.lucene.index.IndexCommit commit, boolean lockDirectory) throws IOException
      Returns a new MetadataSnapshot for the given commit. If the given commit is null, the latest commit point is used. Note that this method requires the caller to verify that it has the right to access the store and that no concurrent file changes are happening. If in doubt, you probably want to use one of the following instead: readMetadataSnapshot(Path, ShardId, NodeEnvironment.ShardLocker, Logger) to read metadata while locking, IndexShard.snapshotStoreMetadata() to safely read from an existing shard, or IndexShard.acquireLastIndexCommit(boolean) to get an IndexCommit which is safe to use but has to be freed.
      Parameters:
      commit - the index commit to read the snapshot from or null if the latest snapshot should be read from the directory
      lockDirectory - if true the index writer lock will be obtained before reading the snapshot. This should only be used if there is no started shard using this store.
      Throws:
      org.apache.lucene.index.CorruptIndexException - if the Lucene index is corrupted. This can be caused by a checksum mismatch or an unexpected exception while opening the index or reading the segments file.
      org.apache.lucene.index.IndexFormatTooOldException - if the Lucene index is too old to be opened.
      org.apache.lucene.index.IndexFormatTooNewException - if the Lucene index is too new to be opened.
      FileNotFoundException - if one or more files referenced by a commit are not present.
      NoSuchFileException - if one or more files referenced by a commit are not present.
      org.apache.lucene.index.IndexNotFoundException - if the commit point can't be found in this store
      IOException
    • renameTempFilesSafe

      public void renameTempFilesSafe(Map<String,String> tempFileMap) throws IOException
      Renames all the given files from the key of the map to the value of the map. All successfully renamed files are removed from the map in-place.
      Throws:
      IOException
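
      A minimal sketch with purely illustrative temp-file names (the "recovery." prefix is an assumption of this example, not an API contract):

          Map<String, String> tempFileMap = new HashMap<>();
          tempFileMap.put("recovery.abc123._0.cfs", "_0.cfs");
          tempFileMap.put("recovery.abc123.segments_5", "segments_5");
          // entries are removed from the map as the corresponding files are renamed
          store.renameTempFilesSafe(tempFileMap);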
    • checkIndex

      public org.apache.lucene.index.CheckIndex.Status checkIndex(PrintStream out) throws IOException
      Checks and returns the status of the existing index in this store.
      Parameters:
      out - where infoStream messages should go. See CheckIndex.setInfoStream(PrintStream)
      Throws:
      IOException
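
      A minimal sketch, assuming store is referenced and logger is in scope; the CheckIndex report is captured in memory and only logged when the index is not clean:

          ByteArrayOutputStream infoStream = new ByteArrayOutputStream();
          CheckIndex.Status status = store.checkIndex(new PrintStream(infoStream, true, StandardCharsets.UTF_8));
          if (status.clean == false) {
              logger.warn("check index failure:\n{}", infoStream.toString(StandardCharsets.UTF_8));
          }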
    • stats

      public StoreStats stats(long reservedBytes, LongUnaryOperator localSizeFunction) throws IOException
      Parameters:
      reservedBytes - a prediction of how much larger the store is expected to grow, or StoreStats.UNKNOWN_RESERVED_BYTES.
      localSizeFunction - to calculate the local size of the shard based on the shard size.
      Throws:
      IOException
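
      A minimal sketch, assuming no additional reserved bytes are expected and the local size equals the on-disk size:

          StoreStats stats = store.stats(StoreStats.UNKNOWN_RESERVED_BYTES, LongUnaryOperator.identity());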
    • incRef

      public final void incRef()
      Increments the refCount of this Store instance. RefCounts are used to determine when a Store can be closed safely, i.e. as soon as there are no more references. Be sure to always call a corresponding decRef(), in a finally clause; otherwise the store may never be closed. Note that close() simply calls decRef(), which means that the Store will not really be closed until decRef() has been called for all outstanding references.

      Note: Close can safely be called multiple times.

      Specified by:
      incRef in interface org.elasticsearch.core.RefCounted
      Throws:
      org.apache.lucene.store.AlreadyClosedException - if the reference counter cannot be incremented.
      See Also:
      decRef(), tryIncRef()
    • tryIncRef

      public final boolean tryIncRef()
      Tries to increment the refCount of this Store instance. This method will return true iff the refCount was incremented successfully otherwise false. RefCounts are used to determine when a Store can be closed safely, i.e. as soon as there are no more references. Be sure to always call a corresponding decRef(), in a finally clause; otherwise the store may never be closed. Note that close() simply calls decRef(), which means that the Store will not really be closed until decRef() has been called for all outstanding references.

      Note: Close can safely be called multiple times.

      Specified by:
      tryIncRef in interface org.elasticsearch.core.RefCounted
      See Also:
      decRef(), incRef()
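
      A minimal sketch of the conditional variant: only use the store if the reference could actually be acquired:

          if (store.tryIncRef()) {
              try {
                  // use the store...
              } finally {
                  store.decRef();
              }
          } else {
              // the store is already closed or closing; fall back or fail gracefully
          }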
    • decRef

      public final boolean decRef()
      Decreases the refCount of this Store instance. If the refCount drops to 0, then this store is closed.
      Specified by:
      decRef in interface org.elasticsearch.core.RefCounted
      See Also:
      incRef()
    • close

      public void close()
      Specified by:
      close in interface AutoCloseable
      Specified by:
      close in interface Closeable
    • isClosing

      public boolean isClosing()
      Returns:
      true if the close() method has been called. This indicates that the current store is either closed or being closed waiting for all references to it to be released. You might prefer to use ensureOpen() instead.
    • readMetadataSnapshot

      public static Store.MetadataSnapshot readMetadataSnapshot(Path indexLocation, ShardId shardId, NodeEnvironment.ShardLocker shardLocker, org.apache.logging.log4j.Logger logger) throws IOException
      Reads a MetadataSnapshot from the given index location, or returns an empty snapshot if it can't be read.
      Throws:
      IOException - if the index we try to read is corrupted
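
      A minimal sketch, assuming indexLocation, shardId, shardLocker and logger are supplied by the calling code (for example a node-level cleanup or recovery task):

          Store.MetadataSnapshot snapshot =
                  Store.readMetadataSnapshot(indexLocation, shardId, shardLocker, logger);
          // an empty snapshot is returned if nothing readable exists at the location
          logger.info("found [{}] committed files", snapshot.size());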
    • tryOpenIndex

      public static void tryOpenIndex(Path indexLocation, ShardId shardId, NodeEnvironment.ShardLocker shardLocker, org.apache.logging.log4j.Logger logger) throws IOException, ShardLockObtainFailedException
      Tries to open an index at the given location. This includes reading the segment infos and possible corruption markers. If the index cannot be opened, an exception is thrown.
      Throws:
      IOException
      ShardLockObtainFailedException
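
      A minimal sketch, assuming the same arguments as readMetadataSnapshot(Path, ShardId, NodeEnvironment.ShardLocker, Logger) above:

          // throws if the index at indexLocation cannot be opened or carries a corruption marker
          Store.tryOpenIndex(indexLocation, shardId, shardLocker, logger);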
    • createVerifyingOutput

      public org.apache.lucene.store.IndexOutput createVerifyingOutput(String fileName, StoreFileMetadata metadata, org.apache.lucene.store.IOContext context) throws IOException
      The returned IndexOutput validates the file's checksum.

      Note: Checksums are calculated by default since Lucene version 4.8.0. This method only adds verification against the checksum in the given metadata and does not add any significant overhead.

      Throws:
      IOException
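
      A minimal sketch of the verify-on-write pattern, assuming metadata is the StoreFileMetadata of the source file and source is an InputStream over its bytes (both are assumptions of this example):

          try (IndexOutput output = store.createVerifyingOutput(metadata.name(), metadata, IOContext.DEFAULT)) {
              byte[] buffer = new byte[8192];
              int read;
              while ((read = source.read(buffer)) != -1) {
                  output.writeBytes(buffer, 0, read);
              }
              // fails if the written bytes do not match the checksum recorded in the metadata
              Store.verify(output);
          }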
    • verify

      public static void verify(org.apache.lucene.store.IndexOutput output) throws IOException
      Throws:
      IOException
    • openVerifyingInput

      public org.apache.lucene.store.IndexInput openVerifyingInput(String filename, org.apache.lucene.store.IOContext context, StoreFileMetadata metadata) throws IOException
      Throws:
      IOException
    • verify

      public static void verify(org.apache.lucene.store.IndexInput input) throws IOException
      Throws:
      IOException
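
      A minimal read-side counterpart, assuming metadata describes an existing file that is small enough to read in one go:

          try (IndexInput input = store.openVerifyingInput(metadata.name(), IOContext.READONCE, metadata)) {
              byte[] bytes = new byte[(int) metadata.length()];
              input.readBytes(bytes, 0, bytes.length);
              // fails if the bytes read do not match the recorded checksum
              Store.verify(input);
          }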
    • checkIntegrityNoException

      public boolean checkIntegrityNoException(StoreFileMetadata md)
    • checkIntegrityNoException

      public static boolean checkIntegrityNoException(StoreFileMetadata md, org.apache.lucene.store.Directory directory)
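
      A minimal sketch, assuming md is the StoreFileMetadata of a file expected to exist in the given directory:

          if (Store.checkIntegrityNoException(md, directory) == false) {
              // the file is missing, truncated, or its checksum does not match the metadata
          }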
    • checkIntegrity

      public static void checkIntegrity(StoreFileMetadata md, org.apache.lucene.store.Directory directory) throws IOException
      Throws:
      IOException
    • isMarkedCorrupted

      public boolean isMarkedCorrupted() throws IOException
      Throws:
      IOException
    • removeCorruptionMarker

      public void removeCorruptionMarker() throws IOException
      Deletes all corruption markers from this store.
      Throws:
      IOException
    • failIfCorrupted

      public void failIfCorrupted() throws IOException
      Throws:
      IOException
    • cleanupAndVerify

      public void cleanupAndVerify(String reason, Store.MetadataSnapshot sourceMetadata) throws IOException
      This method deletes every file in this store that is not contained in the given source metadata or is a legacy checksum file. After the deletion it pulls the latest metadata snapshot from the store and compares it to the given snapshot. If the snapshots are inconsistent, an IllegalStateException is thrown.
      Parameters:
      reason - the reason for this cleanup operation logged for each deleted file
      sourceMetadata - the metadata used for cleanup. All files in this metadata should be kept around.
      Throws:
      IOException - if an IOException occurs
      IllegalStateException - if the latest snapshot in this store differs from the given one after the cleanup.
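
      A minimal sketch, assuming sourceMetadata is the MetadataSnapshot received from the recovery source; the reason string is purely illustrative:

          // removes everything not contained in the source snapshot, then re-checks consistency
          store.cleanupAndVerify("recovery finalization", sourceMetadata);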
    • refCount

      public int refCount()
      Returns the current reference count.
    • beforeClose

      public void beforeClose()
    • isAutogenerated

      public static boolean isAutogenerated(String name)
      Returns true if the file is auto-generated by the store and shouldn't be deleted during cleanup. This includes write lock and checksum files.
    • digestToString

      public static String digestToString(long digest)
      Produces a string representation of the given digest value.
    • deleteQuiet

      public void deleteQuiet(String... files)
    • markStoreCorrupted

      public void markStoreCorrupted(IOException exception) throws IOException
      Marks this store as corrupted. This method writes a corrupted_${uuid} file containing the given exception message. If a store contains a corrupted_${uuid} file, isMarkedCorrupted() will return true.
      Throws:
      IOException
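
      A minimal sketch of how the marker is typically used together with failIfCorrupted() (the surrounding method is assumed to declare IOException):

          store.failIfCorrupted();              // refuse to proceed if a marker is already present
          try {
              SegmentInfos segmentInfos = store.readLastCommittedSegmentsInfo();
              // ... use segmentInfos ...
          } catch (CorruptIndexException e) {
              store.markStoreCorrupted(e);      // persists the failure; isMarkedCorrupted() now returns true
              throw e;
          }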
    • createEmpty

      public void createEmpty() throws IOException
      Creates an empty Lucene index and a corresponding empty translog. Any existing data will be deleted.
      Throws:
      IOException
    • bootstrapNewHistory

      public void bootstrapNewHistory() throws IOException
      Marks an existing Lucene index with a new history UUID. This is used to make sure no existing shard will recover from this index using ops-based recovery.
      Throws:
      IOException
    • bootstrapNewHistory

      public void bootstrapNewHistory(long localCheckpoint, long maxSeqNo) throws IOException
      Marks an existing Lucene index with a new history UUID and sets the given local checkpoint as well as the maximum sequence number. This is used to make sure no existing shard will recover from this index using ops-based recovery.
      Throws:
      IOException
      See Also:
      SequenceNumbers.LOCAL_CHECKPOINT_KEY, SequenceNumbers.MAX_SEQ_NO
    • associateIndexWithNewTranslog

      public void associateIndexWithNewTranslog(String translogUUID) throws IOException
      Force bakes the given translog UUID as recovery information into the Lucene index. This is used when recovering from a snapshot or during peer file-based recovery, where a new empty translog is created and the existing Lucene index needs to be changed to use it.
      Throws:
      IOException
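
      A minimal sketch of the restore/bootstrap flow, assuming newTranslogUUID was returned by the step that created the fresh, empty translog (not shown here):

          store.bootstrapNewHistory();                          // new history uuid, forces file-based recovery for existing copies
          store.associateIndexWithNewTranslog(newTranslogUUID); // the commit now references the new translog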
    • ensureIndexHasHistoryUUID

      public void ensureIndexHasHistoryUUID() throws IOException
      Checks that the Lucene index contains a history UUID marker. If not, a new one is generated and committed.
      Throws:
      IOException
    • trimUnsafeCommits

      public void trimUnsafeCommits(long lastSyncedGlobalCheckpoint, long minRetainedTranslogGen, Version indexVersionCreated) throws IOException
      Keeping existing unsafe commits when opening an engine can be problematic because these commits are not safe at recovery time but can suddenly become safe in the future. The following issues can happen if unsafe commits are kept on init.

      1. A replica can use an unsafe commit in peer recovery. This happens when a replica with a safe commit c1 (max_seqno=1) and an unsafe commit c2 (max_seqno=2) recovers from a primary with c1 (max_seqno=1). If a new document (seqno=2) is added without flushing, the global checkpoint is advanced to 2; when the replica recovers again, it will use the unsafe commit c2 (max_seqno=2, at most gcp=2) as the starting commit for sequence-based recovery even though the commit c2 contains a stale operation, and the document (with seqno=2) will not be replicated to the replica.

      2. The minimum translog generation for recovery can go backwards in peer recovery. This happens when a replica has a safe commit c1 (local_checkpoint=1, recovery_translog_gen=1) and an unsafe commit c2 (local_checkpoint=2, recovery_translog_gen=2). The replica recovers from a primary, keeps c2 as the last commit, then sets last_translog_gen to 2. Flushing a new commit on the replica will cause an exception because the new last commit c3 will have recovery_translog_gen=1: the recovery translog generation of a commit is calculated based on the current local checkpoint, and the local checkpoint of c3 is 1 while the local checkpoint of c2 is 2.

      3. A commit without a translog can be used in recovery. An old index, created before multiple commits were introduced (v6.2), may not have a safe commit. If that index has a snapshotted commit without a translog and an unsafe commit, the policy can consider the snapshotted commit a safe commit for recovery even though the commit does not have a translog.

      Throws:
      IOException
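
      A minimal sketch, assuming the checkpoint and translog-generation values were read from the translog's checkpoint file and the index creation version comes from the index metadata:

          store.trimUnsafeCommits(lastSyncedGlobalCheckpoint, minRetainedTranslogGen, indexSettings.getIndexVersionCreated());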
    • findSafeIndexCommit

      public Optional<SequenceNumbers.CommitInfo> findSafeIndexCommit(long globalCheckpoint) throws IOException
      Returns the SequenceNumbers.CommitInfo of the safe commit if it exists.
      Throws:
      IOException
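
      A minimal sketch, assuming globalCheckpoint is the recovering shard's known global checkpoint:

          Optional<SequenceNumbers.CommitInfo> safeCommit = store.findSafeIndexCommit(globalCheckpoint);
          if (safeCommit.isPresent()) {
              // operation-based (sequence number) recovery can start from this commit
          }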