Struct fdb_rl::raw_record::RawRecord
pub struct RawRecord {
primary_key: RawRecordPrimaryKey,
version: RecordVersion,
record_bytes: Bytes,
}
A wrapper around all information that can be determined about a record before serializing and deserializing it.
Primary key schema constraint
In the Java Record Layer, by default all record types within a record store are interleaved within the same record extent. This behavior can be changed using the RecordTypeKeyExpression Java class, which indicates that the record type identifier should be contained at the start of the primary key, thereby partitioning the record extent by record type.
In our implementation, we require that all record types within a record store have the same RawRecordPrimaryKeySchema. While it is not handled by the RawRecord type, our record extent will also be partitioned by record type, and that information will be contained at the beginning of RawRecordPrimaryKeySchema.
Additionally, using RawRecordPrimaryKeySchema constrains the flexibility of the primary key schema for record types within a record store: it requires all record types to have the same primary key schema.
There are two possible workarounds here. One is to set up a unique secondary index on a particular field of the record type, effectively mimicking primary key behavior. The other is to use a different record store altogether.
The motivation for choosing this approach is twofold.
Firstly, it avoids edge cases with split_helper where integer values are part of a primary key tuple.
Assume that we allowed record types to have multiple primary key schemas. Suppose we have two record types with primary key schemas of (int,) and (int, int). Given the way split_helper works, their split suffixes (-1, 0, etc.) would overlap. Now if we had to delete a record with primary key (1,), we cannot simply issue a clear range on prefix (1,) without verifying whether a key of the form (1, ...,) exists. If any key of the form (1, ...,) exists, then deleting key (1,) would accidentally delete that key too.
Hence, to avoid this class of problems, we do not permit record types with multiple primary key schemas.
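To make the overlap concrete, here is a minimal sketch using a toy tuple encoding. The 0x15 type code mirrors the real FoundationDB tuple encoding for one-byte positive integers, but everything else is simplified and hypothetical. It shows that the packed form of (1,) is a byte prefix of the packed form of (1, 2), so a clear range on the former would also delete keys stored under the latter.

```rust
// Toy version of the FoundationDB tuple encoding for small positive
// integers: type code 0x15 followed by the value byte. Illustrative
// only; the real encoding handles many more cases.
fn pack(tuple: &[u8]) -> Vec<u8> {
    tuple.iter().flat_map(|&v| [0x15, v]).collect()
}

fn main() {
    let one = pack(&[1]); // packed primary key (1,)
    let one_two = pack(&[1, 2]); // packed primary key (1, 2)

    // pack((1,)) is a byte prefix of pack((1, 2)), so a clear range on
    // the (1,) prefix would also remove every key stored under (1, 2).
    assert!(one_two.starts_with(&one));
}
```

With a single shared primary key schema, no packed primary key can be a proper prefix of another, so clear ranges on a primary key prefix are safe.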
Secondly, the RawRecordCursor implementation is aware of RawRecordPrimaryKeySchema. This means any RawRecord value returned by the cursor will always be well formed, and any errors can be identified at the lowest level of abstraction.
Warning: This type is not meant to be public. We need to make this type public to support integration tests. Do not use this type in your code.
Fields
primary_key: RawRecordPrimaryKey
version: RecordVersion
record_bytes: Bytes
Implementations
impl RawRecord
pub fn into_parts(self) -> (RawRecordPrimaryKey, RecordVersion, Bytes)
Extract the primary key, record version, and record bytes from RawRecord.
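A minimal sketch of the destructuring pattern into_parts supports, with hypothetical stub types standing in for the real RawRecordPrimaryKey, RecordVersion, and Bytes from the fdb_rl and bytes crates:

```rust
// Stub types (hypothetical) standing in for RawRecordPrimaryKey,
// RecordVersion, and Bytes, so the example is self-contained.
#[derive(Debug, PartialEq)]
struct PrimaryKey(u64);
#[derive(Debug, PartialEq)]
struct Version(u64);

struct Record {
    primary_key: PrimaryKey,
    version: Version,
    record_bytes: Vec<u8>,
}

impl Record {
    // Mirrors RawRecord::into_parts: consume the record and hand back
    // its three components by value.
    fn into_parts(self) -> (PrimaryKey, Version, Vec<u8>) {
        (self.primary_key, self.version, self.record_bytes)
    }
}

fn main() {
    let record = Record {
        primary_key: PrimaryKey(1),
        version: Version(0),
        record_bytes: vec![0xAB, 0xCD],
    };
    let (pk, version, bytes) = record.into_parts();
    assert_eq!(pk, PrimaryKey(1));
    assert_eq!(version, Version(0));
    assert_eq!(bytes, vec![0xAB, 0xCD]);
}
```

Because the method takes `self` by value, the RawRecord is consumed and each component is moved out without cloning.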
Trait Implementations
impl Cursor<RawRecord> for RawRecordCursor
async fn next(&mut self) -> CursorResult<RawRecord>
Return the next RawRecord.
In a regular state machine, transitions are represented using event [guard] / action, and we send the event directly to the state machine.
However, in this case, all side effects of reading from the database are managed by the driver loop (below), and we only use the state machine to manage state data.
When we are in a state where we can return data (or an error), we exit the loop and return the data. This is managed by returning a Some(_: CursorResult<RawRecord>) value.
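The driver-loop pattern described above can be sketched as follows. All names here are hypothetical stand-ins; the real implementation reads from the database asynchronously and returns a CursorResult<RawRecord>.

```rust
// Minimal sketch (hypothetical names) of the driver-loop pattern: the
// loop performs the side effect of "reading", while the state machine
// only tracks state data and signals readiness by returning Some(_).
struct Machine {
    remaining: u32,
}

impl Machine {
    // Feed the next item in; Some(_) tells the driver loop it can exit
    // and return the value to the caller.
    fn step(&mut self, item: u32) -> Option<u32> {
        self.remaining -= 1;
        if self.remaining == 0 {
            Some(item)
        } else {
            None
        }
    }
}

fn drive(machine: &mut Machine) -> u32 {
    let mut item = 0;
    loop {
        // The side effect (a stand-in for a database read) lives in the
        // driver loop, not inside the state machine.
        item += 10;
        if let Some(value) = machine.step(item) {
            return value;
        }
    }
}

fn main() {
    let mut machine = Machine { remaining: 3 };
    assert_eq!(drive(&mut machine), 30);
}
```

Keeping the side effects in the driver loop means the state machine itself stays a pure function of its state data, which is what lets errors be identified at the lowest level of abstraction.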