GitHub Repository: bevyengine/bevy
Path: blob/main/crates/bevy_ecs/src/entity/hash.rs
use core::hash::{BuildHasher, Hasher};

#[cfg(feature = "bevy_reflect")]
use bevy_reflect::{std_traits::ReflectDefault, Reflect};

/// A [`BuildHasher`] that results in an [`EntityHasher`].
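///
/// A minimal usage sketch (assuming `std` and the usual `bevy_ecs::entity`
/// re-export), keyed by `u64`, which, like `Entity`, hashes itself with a
/// single `write_u64` call:
///
/// ```
/// use bevy_ecs::entity::EntityHash;
/// use std::collections::HashMap;
///
/// // `EntityHash` is a zero-sized `BuildHasher`, so it is passed by value.
/// let mut map: HashMap<u64, &str, EntityHash> = HashMap::with_hasher(EntityHash);
/// map.insert(42, "spawned");
/// assert_eq!(map.get(&42), Some(&"spawned"));
/// ```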
#[derive(Debug, Default, Clone)]
#[cfg_attr(feature = "bevy_reflect", derive(Reflect), reflect(Default, Clone))]
pub struct EntityHash;

impl BuildHasher for EntityHash {
    type Hasher = EntityHasher;

    fn build_hasher(&self) -> Self::Hasher {
        Self::Hasher::default()
    }
}

/// A very fast hash that is only designed to work on generational indices
/// like [`Entity`](super::Entity). It will panic if used to hash a type containing
/// non-u64 fields.
///
/// This is heavily optimized for typical cases, where you have mostly live
/// entities, and works particularly well for contiguous indices.
///
/// If you have an unusual case -- say all your indices are multiples of 256
/// or most of the entities are dead generations -- then you might also want to
/// try [`DefaultHasher`](bevy_platform::hash::DefaultHasher) for a slower hash
/// computation but fewer lookup conflicts.
#[derive(Debug, Default)]
pub struct EntityHasher {
    hash: u64,
}

impl Hasher for EntityHasher {
    #[inline]
    fn finish(&self) -> u64 {
        self.hash
    }

    fn write(&mut self, _bytes: &[u8]) {
        panic!("EntityHasher can only hash u64 fields.");
    }

    #[inline]
    fn write_u64(&mut self, bits: u64) {
        // SwissTable (and thus `hashbrown`) cares about two things from the hash:
        // - H1: low bits (masked by `2ⁿ-1`) to pick the slot in which to store the item
        // - H2: high 7 bits are used to SIMD optimize hash collision probing
        // For more see <https://abseil.io/about/design/swisstables#metadata-layout>
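        // (As a concrete illustration: for a table with 2¹⁴ slots, H1 is
        // `hash & (2¹⁴ - 1)` and H2 is the top seven bits, `hash >> 57`.)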

        // This hash function assumes that the entity ids are still well-distributed,
        // so for H1 it leaves the entity id alone in the low bits so that id locality
        // will also give memory locality for things spawned together.
        // For H2, take advantage of the fact that while multiplication doesn't
        // spread entropy to the low bits, it's incredibly good at spreading it
        // upward, which is exactly where we need it the most.

        // While this does include the generation in the output, it doesn't do so
        // *usefully*. H1 won't care until you have over 3 billion entities in
        // the table, and H2 won't care until something hits generation 33 million.
        // Hence the doc comment above suggesting that this is best for live entities,
        // where there won't be generation conflicts where it would matter.

        // The high 32 bits of this are ⅟φ for Fibonacci hashing. That works
        // particularly well for hashing for the same reason as described in
        // <https://extremelearning.com.au/unreasonable-effectiveness-of-quasirandom-sequences/>
        // It loses no information because it has a modular inverse.
        // (Specifically, `0x144c_bc89_u32 * 0x9e37_79b9_u32 == 1`.)
        //
        // The low 32 bits make that part of the product just a pass-through.
        const UPPER_PHI: u64 = 0x9e37_79b9_0000_0001;

        // This is `(MAGIC * index + generation) << 32 + index`, in a single instruction.
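        //
        // To expand that: `bits` is `(generation << 32) | index` and `UPPER_PHI`
        // is `(MAGIC << 32) + 1`, so the `(generation << 32) * (MAGIC << 32)`
        // term wraps away mod 2⁶⁴ and what is left is `(MAGIC * index << 32) + bits`.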
        self.hash = bits.wrapping_mul(UPPER_PHI);
    }
}
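
// Illustrative sketch (module and test names are illustrative, not upstream
// API): small checks of the properties the comments above rely on.
#[cfg(test)]
mod entity_hash_sketch_tests {
    use super::*;
    use core::hash::{BuildHasher, Hasher};

    #[test]
    fn fibonacci_constant_has_a_modular_inverse() {
        // `0x144c_bc89_u32 * 0x9e37_79b9_u32 == 1` (mod 2³²), as claimed above.
        assert_eq!(0x144c_bc89_u32.wrapping_mul(0x9e37_79b9_u32), 1);
    }

    #[test]
    fn low_bits_pass_through_for_h1() {
        let mut hasher = EntityHash.build_hasher();
        hasher.write_u64((7 << 32) | 42); // generation 7, index 42
        // The multiplier's low 32 bits are 1, so the index survives unchanged
        // in the low half of the hash, which is what SwissTable's H1 masks.
        assert_eq!(hasher.finish() as u32, 42);
    }

    #[test]
    #[should_panic]
    fn non_u64_writes_panic() {
        // Matches the doc comment: anything that hashes via `write` panics.
        let mut hasher = EntityHash.build_hasher();
        hasher.write(&[1, 2, 3]);
    }
}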