//! Helper module for splitting records across multiple key-value pairs.
//!
//! FoundationDB has key and value size [limitation]s of 10KB and
//! 100KB respectively. While we do not explicitly check the size of
//! our keys, this module provides functions that, when required, will
//! split a serialized record into multiple 100KB chunks and spread
//! them across multiple key-value pairs.
//!
//! This is done by adding a suffix to the key. The keyspace for a
//! record is organized as shown below.
//!
//! ```text
//! |------------------------------+---------------------|
//! | Key | Value |
//! |------------------------------+---------------------|
//! | (Subspace, Primary Key, -1,) | Record header |
//! |------------------------------+---------------------|
//! | (Subspace, Primary Key, 0,) | 100KB chunk or less |
//! |------------------------------+---------------------|
//! | (Subspace, Primary Key, 1,) | 100KB chunk or less |
//! |------------------------------+---------------------|
//! | (Subspace, Primary Key, 2,) | 100KB chunk or less |
//! |------------------------------+---------------------|
//! | ... | ... |
//! | ... | ... |
//! |------------------------------+---------------------|
//! ```
//!
//! There is a record header at suffix `-1`. The record header is a
//! tuple of the form:
//!
//! ```text
//! (header_version, data_splits, incarnation, versionstamp)
//! ```
//!
//! * The `header_version` is a number describing the version of the
//! header. The form described above is `header_version` of `0`.
//!
//! * `data_splits` is the number of data splits contained within
//! this record. A record has at least one data split, *even* in the
//! case of an empty record.
//!
//! * `incarnation` is the incarnation of the record. Incarnation is
//! managed using [`RecordContext`] and is incremented each time a
//! record is migrated between FoundationDB clusters.
//!
//! * The `versionstamp` contains information about
//! [`RecordVersion`]'s global version and local version.
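//!
//! For illustration, the header for a record stored in three data
//! splits, with no incarnation and a complete versionstamp, would be
//! a tuple of the shape (values here are hypothetical):
//!
//! ```text
//! (0, 3, Null, <12-byte versionstamp>)
//! ```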
//!
//! There is *no* tuple encoding for the data. That is, the *100KB
//! chunk or less* value is stored in raw form. This is because tuple
//! encoding would introduce escape sequences which, depending on the
//! data, might exceed the 100KB limit.
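//!
//! For example, a 250,000 byte serialized record would be stored as
//! three raw chunks, with `data_splits` set to `3` in the record
//! header:
//!
//! ```text
//! (Subspace, Primary Key, 0,) -> serialized[0..100_000]
//! (Subspace, Primary Key, 1,) -> serialized[100_000..200_000]
//! (Subspace, Primary Key, 2,) -> serialized[200_000..250_000]
//! ```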
//!
//! **Note:** In the key that gets generated, we flatten the primary
//! key tuple when the suffix is appended to it. The primary key is
//! not stored as a *nested* tuple.
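//!
//! For example, for a (hypothetical) primary key tuple `("abc", 1)`,
//! the chunk key with suffix `0` packs the flat tuple:
//!
//! ```text
//! ("abc", 1, 0)      <- key tuple that is packed
//! (("abc", 1), 0)    <- NOT this nested form
//! ```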
//!
//! A minimal record will be of the form
//!
//! ```text
//! |------------------------------+---------------|
//! | Key | Value |
//! |------------------------------+---------------|
//! | (Subspace, Primary Key, -1,) | Record header |
//! |------------------------------+---------------|
//! | (Subspace, Primary Key, 0,) | "" |
//! |------------------------------+---------------|
//! ```
//!
//! The functions [`load`], [`save`] and [`delete`] do not take a
//! value of [`RawRecordPrimaryKeySchema`] as an
//! argument. **However**, their implementations assume that you are
//! adhering to the *primary key schema constraint* mentioned in the
//! documentation for [`RawRecord`].
//!
//! <p style="background:rgba(255,181,77,0.16);padding:0.75em;">
//! <strong>Warning:</strong> Functions in this module are
//! <strong>not</strong> meant to be public. We need to make functions
//! in this module public to support integration tests. Do not use
//! functions in this module in your code.</p>
//!
//! [limitation]: https://apple.github.io/foundationdb/known-limitations.html#large-keys-and-values
//! [`RecordContext`]: crate::RecordContext
//!
//! [`RawRecordPrimaryKeySchema`]: crate::raw_record::primary_key::RawRecordPrimaryKeySchema
//!
//! [`RawRecord`]: crate::raw_record::RawRecord
//
// In the design, we use at least one data split even in the case of
// an empty record because it helps us model our `RawRecordCursor`
// more easily. When doing a forward *or* reverse scan, we can easily
// determine the number of key-values to read before expecting a
// record header.
pub(crate) mod error;
use bytes::{Buf, BufMut, Bytes, BytesMut};
use num_bigint::BigInt;
use fdb::error::{FdbError, FdbResult};
use fdb::range::{Range, StreamingMode};
use fdb::subspace::Subspace;
use fdb::transaction::{MutationType, ReadTransaction, Transaction};
use fdb::tuple::{Null, Tuple, Versionstamp};
use fdb::{Key, Value};
use std::convert::{TryFrom, TryInto};
use std::ops::ControlFlow;
use crate::cursor::{CursorError, KeyValueCursorBuilder, NoNextReason};
use crate::range::TupleRange;
use crate::scan::{ScanLimiter, ScanPropertiesBuilder};
use crate::RecordVersion;
use error::{
SPLIT_HELPER_INVALID_PRIMARY_KEY, SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER,
SPLIT_HELPER_LOAD_INVALID_SERIALIZED_BYTES, SPLIT_HELPER_SAVE_INVALID_SERIALIZED_BYTES_SIZE,
SPLIT_HELPER_SCAN_LIMIT_REACHED,
};
/// If a record is greater than this size (in bytes) it will be split
/// into multiple key-value pairs.
const SPLIT_RECORD_SIZE: usize = 100_000;
/// Record header version `0`.
///
/// The [`RawRecord`] type cursor implementation uses this type. If we
/// decide to transition to newer record header versions, we will need
/// to make corresponding changes to [`RawRecord`].
///
/// [`RawRecord`]: crate::raw_record::RawRecord
#[derive(Debug, PartialEq)]
pub(crate) struct RecordHeaderV0 {
header_version: i8,
data_splits: i8,
incarnation: Option<u64>,
versionstamp: Versionstamp,
}
impl RecordHeaderV0 {
fn new(
data_splits: i8,
incarnation: Option<u64>,
versionstamp: Versionstamp,
) -> RecordHeaderV0 {
RecordHeaderV0 {
header_version: 0,
data_splits,
incarnation,
versionstamp,
}
}
fn save<Tr>(
self,
tr: &Tr,
maybe_subspace: &Option<Subspace>,
primary_key: &Tuple,
complete: bool,
) -> FdbResult<()>
where
Tr: Transaction,
{
let RecordHeaderV0 {
header_version,
data_splits,
incarnation,
versionstamp,
} = self;
// (subspace, primary_key, -1)
let record_version_key = Key::from({
let key_tup = {
// (primary_key, -1)
let mut t = primary_key.clone();
t.push_back::<i8>(-1);
t
};
maybe_subspace
.as_ref()
.map(|s| s.subspace(&key_tup).pack())
.unwrap_or_else(|| key_tup.pack())
});
let record_version_value_tuple = {
let tup: (i8, i8, Option<u64>, Versionstamp) =
(header_version, data_splits, incarnation, versionstamp);
let mut t = Tuple::new();
// header_version
t.push_back::<i8>(tup.0);
// data_splits
t.push_back::<i8>(tup.1);
// incarnation
if let Some(i) = tup.2 {
t.push_back::<BigInt>(i.into());
} else {
t.push_back::<Null>(Null);
}
// versionstamp
t.push_back::<Versionstamp>(tup.3);
t
};
if complete {
let record_version_value = Value::from(record_version_value_tuple.pack());
tr.set(record_version_key, record_version_value);
} else {
// We need a value of `Bytes` type here, because of how the
// API is designed.
let record_version_value =
record_version_value_tuple.pack_with_versionstamp(Bytes::new())?;
unsafe {
tr.mutate(
MutationType::SetVersionstampedValue,
record_version_key,
record_version_value,
);
}
}
Ok(())
}
pub(crate) fn into_parts(self) -> (i8, RecordVersion) {
let RecordHeaderV0 {
data_splits,
incarnation,
versionstamp,
..
} = self;
let record_version = if let Some(i) = incarnation {
RecordVersion::from((i, versionstamp))
} else {
RecordVersion::from(versionstamp)
};
(data_splits, record_version)
}
}
impl TryFrom<Value> for RecordHeaderV0 {
type Error = FdbError;
fn try_from(value: Value) -> FdbResult<RecordHeaderV0> {
Tuple::try_from(value)
.and_then(|mut tup| {
// Ensure that the first tuple element is
// `0`. Otherwise we have an invalid header version.
tup.pop_front::<i8>()
.and_then(|x| if x == 0 { Some(tup) } else { None })
.ok_or_else(|| FdbError::new(SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER))
})
.and_then(|mut tup| {
// `i8` is good enough to hold values up to 100. 100KB
// * 100 = 10MB.
tup.pop_front::<i8>()
.and_then(|s| {
// There should be at least one data split for
// a record. Otherwise it is an error.
if (1..=100).contains(&s) {
Some((s, tup))
} else {
None
}
})
.ok_or_else(|| FdbError::new(SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER))
})
.and_then(|(data_splits, mut tup)| {
// Extract incarnation
let maybe_incarnation_version = if let Some(x) = tup.pop_front::<BigInt>() {
Some(
u64::try_from(x)
.map_err(|_| FdbError::new(SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER))?,
)
} else if let Some(Null) = tup.pop_front::<Null>() {
None
} else {
return Err(FdbError::new(SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER));
};
Ok((data_splits, maybe_incarnation_version, tup))
})
.and_then(|(data_splits, incarnation, mut tup)| {
let versionstamp = tup
.pop_front::<Versionstamp>()
.ok_or_else(|| FdbError::new(SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER))?;
Ok(RecordHeaderV0::new(data_splits, incarnation, versionstamp))
})
.map_err(|_| FdbError::new(SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER))
}
}
/// Delete the serialized representation of a record **without** any
/// safety checks.
///
/// ### Note
///
/// You will *never* want to use this function. Any mistake with the
/// `primary_key` can seriously damage the database, as it will issue
/// a [`Transaction::clear_range`] **without** any checks.
///
/// The *only* place where this is useful is in the [`delete`] and
/// [`save`] methods, where we have already checked the validity of
/// `primary_key`.
unsafe fn delete_unchecked<Tr>(
tr: &Tr,
maybe_subspace: &Option<Subspace>,
primary_key: &Tuple,
) -> FdbResult<()>
where
Tr: Transaction,
{
// This function does not really fail. We get a value of
// `FdbResult` type only because we convert
// `TupleRange -> KeyRange -> Range` via
// `Range::try_from(_: KeyRange)`.
tr.clear_range(Range::try_from(
TupleRange::all_of(primary_key).into_key_range(maybe_subspace),
)?);
Ok(())
}
/// Delete the serialized representation of a record.
///
/// <p style="background:rgba(255,181,77,0.16);padding:0.75em;">
/// <strong>Warning:</strong> This function is <strong>not</strong>
/// meant to be public. We need to make this function public to
/// support integration tests. Do not use this function in your
/// code.</p>
pub async fn delete<Tr>(
tr: &Tr,
maybe_scan_limiter: &Option<ScanLimiter>,
maybe_subspace: &Option<Subspace>,
primary_key: &Tuple,
) -> FdbResult<()>
where
Tr: Transaction,
{
// Ensure that `primary_key` is valid. We check this by attempting
// to load the record. When we get either no record or a valid
// record, then we proceed to delete. Otherwise, we return an
// error.
load(tr, maybe_scan_limiter, maybe_subspace, primary_key)
.await
.map_err(|e| {
// `load` function specific errors are:
// - `SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER`
// - `SPLIT_HELPER_LOAD_INVALID_SERIALIZED_BYTES`
//
// If we see that then convert it to
// `SPLIT_HELPER_INVALID_PRIMARY_KEY`, which is a more
// general error.
let error_code = e.code();
if error_code == SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER
|| error_code == SPLIT_HELPER_LOAD_INVALID_SERIALIZED_BYTES
{
FdbError::new(SPLIT_HELPER_INVALID_PRIMARY_KEY)
} else {
e
}
})
.and_then(|_| {
// Safety: If we are here, then it means that
// `primary_key` has either a valid record *or* is
// empty. We can safely issue a
// `delete_unchecked`.
unsafe { delete_unchecked(tr, maybe_subspace, primary_key) }
})
}
/// Save serialized representation using multiple keys if necessary.
///
/// ### Note
///
/// If this function returns an error, then in the *context* of the
/// transaction, any previously stored data *will be* deleted.
///
/// This happens *only* in the context of the transaction. The data
/// will not be deleted from the database until the transaction is
/// committed. If you do not want the data to be deleted, do not
/// commit the transaction.
///
/// If you want to keep the data in the event of an error, you must
/// [`load`] it before calling [`save`].
///
/// <p style="background:rgba(255,181,77,0.16);padding:0.75em;">
/// <strong>Warning:</strong> This function is <strong>not</strong>
/// meant to be public. We need to make this function public to
/// support integration tests. Do not use this function in your
/// code.</p>
pub async fn save<Tr>(
tr: &Tr,
maybe_scan_limiter: &Option<ScanLimiter>,
maybe_subspace: &Option<Subspace>,
primary_key: &Tuple,
serialized: Bytes,
record_version: RecordVersion,
) -> FdbResult<()>
where
Tr: Transaction,
{
// *Note:* If this function returns an error, then you *must*
// assume that the primary key is in an inconsistent
// state.
fn save_inner<Tr>(
tr: &Tr,
maybe_subspace: &Option<Subspace>,
primary_key: &Tuple,
mut serialized: Bytes,
record_version: RecordVersion,
) -> FdbResult<()>
where
Tr: Transaction,
{
// While FoundationDB has a 10MB limit [1] for transactions
// (including the key-values that are written), we do not track
// that information for the *entire transaction*. For the *entire
// transaction*, the only way this would get surfaced is via a
// `transaction_too_large (2101)` [2] error at the time of
// committing.
//
// We do, however, track when the serialized bytes are greater
// than 10MB and return an error if that is the case.
//
// [1]: https://apple.github.io/foundationdb/known-limitations.html#large-transactions
// [2]: https://apple.github.io/foundationdb/api-error-codes.html

// Return an error if serialized bytes are greater than 10MB.
if serialized.len() > (100 * SPLIT_RECORD_SIZE) {
return Err(FdbError::new(
SPLIT_HELPER_SAVE_INVALID_SERIALIZED_BYTES_SIZE,
));
}
let mut suffix = 0;
loop {
let value = Value::from(if serialized.len() < SPLIT_RECORD_SIZE {
serialized.copy_to_bytes(serialized.len())
} else {
serialized.copy_to_bytes(SPLIT_RECORD_SIZE)
});
let key_tup = {
// (primary_key, suffix)
let mut t = primary_key.clone();
// `i8` is good enough to hold values up to 100. 100KB *
// 100 = 10MB.
t.push_back::<i8>(suffix);
t
};
let key = Key::from(
maybe_subspace
.as_ref()
.map(|s| s.subspace(&key_tup).pack())
.unwrap_or_else(|| key_tup.pack()),
);
tr.set(key, value);
// There is no risk of overflow here because we are
// checking at the beginning of this function to make sure
// we return an error in case serialized bytes is greater
// than 10MB.
suffix += 1;
if serialized.is_empty() {
break;
}
}
// `suffix` is incremented once per chunk written, so on exiting
// the loop it equals the number of data splits.
let data_splits = suffix;
let (incarnation, global_version, local_version, complete) = record_version.into_parts();
let versionstamp = Versionstamp::try_from((global_version, local_version))?;
let record_header_v0 = RecordHeaderV0::new(data_splits, incarnation, versionstamp);
record_header_v0.save(tr, maybe_subspace, primary_key, complete)?;
Ok(())
}
delete(tr, maybe_scan_limiter, maybe_subspace, primary_key).await?;
let res = save_inner(tr, maybe_subspace, primary_key, serialized, record_version);
res.map_err(|e| {
// Safety: We are checking validity of the `primary_key` in
// the call to `delete` above.
unsafe {
let _ = delete_unchecked(tr, maybe_subspace, primary_key);
}
e
})
}
/// Load serialized byte array that may be split among several keys.
///
/// When a value of `Ok(None)` is returned, it means that there is no
/// serialized byte array associated with the `primary_key`. If there
/// is a serialized byte array associated with the `primary_key`, then
/// we return an `Ok(Some((record_version, serialized_bytes)))`
/// value. Otherwise, an `Err` value is returned.
///
/// <p style="background:rgba(255,181,77,0.16);padding:0.75em;">
/// <strong>Warning:</strong> This function is <strong>not</strong>
/// meant to be public. We need to make this function public to
/// support integration tests. Do not use this function in your
/// code.</p>
pub async fn load<Tr>(
tr: &Tr,
maybe_scan_limiter: &Option<ScanLimiter>,
maybe_subspace: &Option<Subspace>,
primary_key: &Tuple,
) -> FdbResult<Option<(RecordVersion, Bytes)>>
where
Tr: ReadTransaction,
{
let kv_cursor = {
let scan_properties = {
let mut scan_properties_builder = ScanPropertiesBuilder::default();
unsafe {
scan_properties_builder.set_range_options(|range_options| {
range_options.set_mode(StreamingMode::WantAll)
});
}
if let Some(scan_limiter_ref) = maybe_scan_limiter.as_ref() {
scan_properties_builder.set_scan_limiter(scan_limiter_ref.clone());
}
scan_properties_builder.build()
};
let subspace = if let Some(subspace_ref) = maybe_subspace.as_ref() {
subspace_ref.clone().subspace(primary_key)
} else {
Subspace::new(Bytes::new()).subspace(primary_key)
};
let mut kv_cursor_builder = KeyValueCursorBuilder::new();
kv_cursor_builder
.subspace(subspace)
.key_range(TupleRange::all().into_key_range(&None))
.scan_properties(scan_properties);
kv_cursor_builder.build(tr)
}?;
let (mut kv_btree, err) = kv_cursor.into_btreemap().await;
match err {
// `NoNextReason::SourceExhausted` is the condition that we
// need as it indicates that we have read the entire key-value
// cursor.
CursorError::NoNextReason(NoNextReason::SourceExhausted(_)) => {
if kv_btree.len() == 0 {
Ok(None)
} else {
// Extract Record header key-value pair.
//
// Note: Here we are assuming the first key in the
// BTreeMap is Tuple `(-1,)`. If we change this
// structure in the future, then the logic below
// must be rewritten.
//
// Safety: Safe to unwrap here because we are checking
// `len == 0` above.
let (k, v) = kv_btree.pop_first().unwrap();
let record_header_key = Key::from(
{
let tup: (i8,) = (-1,);
let mut t = Tuple::new();
t.push_back::<i8>(tup.0);
t
}
.pack(),
);
if record_header_key == k {
// Currently we have only one version of record header.
let record_header_v0 = RecordHeaderV0::try_from(v)?;
let RecordHeaderV0 {
data_splits,
incarnation,
versionstamp,
..
} = record_header_v0;
// Ensure that the data splits mentioned in the record
// header are consistent with the number of key-values
// in the record data.
if data_splits
!= kv_btree.len().try_into().map_err(|_| {
FdbError::new(SPLIT_HELPER_LOAD_INVALID_SERIALIZED_BYTES)
})?
{
return Err(FdbError::new(SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER));
}
let record_version = match incarnation {
Some(incarnation_version) => {
RecordVersion::from((incarnation_version, versionstamp))
}
None => RecordVersion::from(versionstamp),
};
let serialized_bytes =
match (0..kv_btree.len()).try_fold(BytesMut::new(), |mut acc, x| {
let key = Key::from(
{
let tup: (BigInt,) = (x.into(),);
let mut t = Tuple::new();
t.push_back::<BigInt>(tup.0);
t
}
.pack(),
);
match kv_btree.remove(&key) {
Some(value) => ControlFlow::Continue({
acc.put(Bytes::from(value));
acc
}),
None => ControlFlow::Break(FdbError::new(
SPLIT_HELPER_LOAD_INVALID_SERIALIZED_BYTES,
)),
}
}) {
ControlFlow::Continue(bytes_mut) => Ok(Bytes::from(bytes_mut)),
ControlFlow::Break(err) => Err(err),
}?;
Ok(Some((record_version, serialized_bytes)))
} else {
// There is no `(Subspace, Primary Key Tuple, -1)`
// key *and* the range is not empty. Therefore it
// is an error.
Err(FdbError::new(SPLIT_HELPER_INVALID_PRIMARY_KEY))
}
}
}
CursorError::NoNextReason(_) => Err(FdbError::new(SPLIT_HELPER_SCAN_LIMIT_REACHED)),
CursorError::FdbError(err, _) => Err(err),
}
}
#[cfg(test)]
mod tests {
mod record_header_v0 {
use bytes::Bytes;
use num_bigint::BigInt;
use fdb::error::FdbError;
use fdb::tuple::{Null, Tuple, Versionstamp};
use fdb::Value;
use std::convert::TryFrom;
use super::super::{error::SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER, RecordHeaderV0};
#[test]
fn try_from_value_try_from() {
// invalid header
{
// `header_version` must be `0`.
{
let value = Value::from(
{
let header_version = 1;
let data_splits = 1;
let incarnation = None;
let versionstamp = Versionstamp::complete(
Bytes::from_static(b"\xAA\xBB\xCC\xDD\xEE\xFF\x00\x01\x02\x03"),
0,
);
let tup: (i8, i8, Option<u64>, Versionstamp) =
(header_version, data_splits, incarnation, versionstamp);
let mut t = Tuple::new();
// header_version
t.push_back::<i8>(tup.0);
// data_splits
t.push_back::<i8>(tup.1);
// incarnation
if let Some(i) = tup.2 {
t.push_back::<BigInt>(i.into());
} else {
t.push_back::<Null>(Null);
}
// versionstamp
t.push_back::<Versionstamp>(tup.3);
t
}
.pack(),
);
assert_eq!(
RecordHeaderV0::try_from(value),
Err(FdbError::new(SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER))
);
}
// `data_splits` must be between 1 and 100
{
let value = Value::from(
{
let header_version = 0;
let data_splits = 0;
let incarnation = None;
let versionstamp = Versionstamp::complete(
Bytes::from_static(b"\xAA\xBB\xCC\xDD\xEE\xFF\x00\x01\x02\x03"),
0,
);
let tup: (i8, i8, Option<u64>, Versionstamp) =
(header_version, data_splits, incarnation, versionstamp);
let mut t = Tuple::new();
// header_version
t.push_back::<i8>(tup.0);
// data_splits
t.push_back::<i8>(tup.1);
// incarnation
if let Some(i) = tup.2 {
t.push_back::<BigInt>(i.into());
} else {
t.push_back::<Null>(Null);
}
// versionstamp
t.push_back::<Versionstamp>(tup.3);
t
}
.pack(),
);
assert_eq!(
RecordHeaderV0::try_from(value),
Err(FdbError::new(SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER))
);
}
// `incarnation` must be an `Option<u64>` and not a `bool`.
{
let value = Value::from(
{
let header_version = 0;
// Must be between 1 and 100
let data_splits = 1;
let incarnation = false;
let versionstamp = Versionstamp::complete(
Bytes::from_static(b"\xAA\xBB\xCC\xDD\xEE\xFF\x00\x01\x02\x03"),
0,
);
let tup: (i8, i8, bool, Versionstamp) =
(header_version, data_splits, incarnation, versionstamp);
let mut t = Tuple::new();
// header_version
t.push_back::<i8>(tup.0);
// data_splits
t.push_back::<i8>(tup.1);
// incarnation
t.push_back::<bool>(tup.2);
// versionstamp
t.push_back::<Versionstamp>(tup.3);
t
}
.pack(),
);
assert_eq!(
RecordHeaderV0::try_from(value),
Err(FdbError::new(SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER))
);
}
// `versionstamp` cannot be of `Null` type.
{
let value = Value::from(
{
let header_version = 0;
let data_splits = 1;
let incarnation = None;
let versionstamp = Null;
let tup: (i8, i8, Option<u64>, Null) =
(header_version, data_splits, incarnation, versionstamp);
let mut t = Tuple::new();
// header_version
t.push_back::<i8>(tup.0);
// data_splits
t.push_back::<i8>(tup.1);
// incarnation
if let Some(i) = tup.2 {
t.push_back::<BigInt>(i.into());
} else {
t.push_back::<Null>(Null);
}
// versionstamp
t.push_back::<Null>(tup.3);
t
}
.pack(),
);
assert_eq!(
RecordHeaderV0::try_from(value),
Err(FdbError::new(SPLIT_HELPER_LOAD_INVALID_RECORD_HEADER))
);
}
}
// valid header.
{
let value = Value::from(
{
let header_version = 0;
let data_splits = 1;
let incarnation = None;
let versionstamp = Versionstamp::complete(
Bytes::from_static(b"\xAA\xBB\xCC\xDD\xEE\xFF\x00\x01\x02\x03"),
0,
);
let tup: (i8, i8, Option<u64>, Versionstamp) =
(header_version, data_splits, incarnation, versionstamp);
let mut t = Tuple::new();
// header_version
t.push_back::<i8>(tup.0);
// data_splits
t.push_back::<i8>(tup.1);
// incarnation
if let Some(i) = tup.2 {
t.push_back::<BigInt>(i.into());
} else {
t.push_back::<Null>(Null);
}
// versionstamp
t.push_back::<Versionstamp>(tup.3);
t
}
.pack(),
);
assert_eq!(
RecordHeaderV0::try_from(value),
Ok(RecordHeaderV0::new(
1,
None,
Versionstamp::complete(
Bytes::from_static(b"\xAA\xBB\xCC\xDD\xEE\xFF\x00\x01\x02\x03"),
0,
)
))
);
}
}
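// Sketch of the chunk-count arithmetic implemented by the loop in
// `save_inner`: the number of data splits is `ceil(len / 100_000)`,
// with a minimum of one split even for an empty record. This is a
// hypothetical standalone test; it does not call `save` itself.
#[test]
fn data_splits_arithmetic() {
    const SPLIT_RECORD_SIZE: usize = 100_000;
    let splits = |len: usize| -> usize {
        if len == 0 {
            // An empty record still occupies one data split.
            1
        } else {
            // Ceiling division without floating point.
            (len + SPLIT_RECORD_SIZE - 1) / SPLIT_RECORD_SIZE
        }
    };
    assert_eq!(splits(0), 1);
    assert_eq!(splits(1), 1);
    // A chunk boundary does not create an extra empty split.
    assert_eq!(splits(SPLIT_RECORD_SIZE), 1);
    assert_eq!(splits(SPLIT_RECORD_SIZE + 1), 2);
    // 10MB is the maximum allowed size: exactly 100 splits.
    assert_eq!(splits(100 * SPLIT_RECORD_SIZE), 100);
}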
}
}