Path: blob/master/src/java.base/share/classes/java/util/HashMap.java
/*
 * Copyright (c) 1997, 2019, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.  Oracle designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Oracle in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

package java.util;

import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import jdk.internal.access.SharedSecrets;

/**
 * Hash table based implementation of the {@code Map} interface.  This
 * implementation provides all of the optional map operations, and permits
 * {@code null} values and the {@code null} key.  (The {@code HashMap}
 * class is roughly equivalent to {@code Hashtable}, except that it is
 * unsynchronized and permits nulls.)
 * This class makes no guarantees as to
 * the order of the map; in particular, it does not guarantee that the order
 * will remain constant over time.
 *
 * <p>This implementation provides constant-time performance for the basic
 * operations ({@code get} and {@code put}), assuming the hash function
 * disperses the elements properly among the buckets.  Iteration over
 * collection views requires time proportional to the "capacity" of the
 * {@code HashMap} instance (the number of buckets) plus its size (the number
 * of key-value mappings).  Thus, it's very important not to set the initial
 * capacity too high (or the load factor too low) if iteration performance is
 * important.
 *
 * <p>An instance of {@code HashMap} has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>.  The
 * <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created.  The
 * <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased.  When the number of
 * entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 *
 * <p>As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs.  Higher values decrease the
 * space overhead but increase the lookup cost (reflected in most of
 * the operations of the {@code HashMap} class, including
 * {@code get} and {@code put}).  The expected number of entries in
 * the map and its load factor should be taken into account when
 * setting its initial capacity, so as to minimize the number of
 * rehash operations.
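The sizing advice above can be sketched as a small usage example. This is not part of the source file; the helper name `presizedCapacity` is illustrative only, and simply divides the expected entry count by the load factor so the table never crosses its resize threshold.

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    // Illustrative helper (not in HashMap): pick an initial capacity large
    // enough that expectedEntries never exceeds capacity * loadFactor.
    static int presizedCapacity(int expectedEntries, float loadFactor) {
        return (int) Math.ceil(expectedEntries / loadFactor);
    }

    public static void main(String[] args) {
        int expected = 1000;
        int cap = presizedCapacity(expected, 0.75f); // 1334; HashMap rounds up to a power of two internally
        Map<Integer, Integer> m = new HashMap<>(cap);
        for (int i = 0; i < expected; i++)
            m.put(i, i * i);
        System.out.println(m.size()); // prints 1000
    }
}
```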
 * If the initial capacity is greater than the
 * maximum number of entries divided by the load factor, no rehash
 * operations will ever occur.
 *
 * <p>If many mappings are to be stored in a {@code HashMap}
 * instance, creating it with a sufficiently large capacity will allow
 * the mappings to be stored more efficiently than letting it perform
 * automatic rehashing as needed to grow the table.  Note that using
 * many keys with the same {@code hashCode()} is a sure way to slow
 * down performance of any hash table. To ameliorate impact, when keys
 * are {@link Comparable}, this class may use comparison order among
 * keys to help break ties.
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be
 * synchronized externally.  (A structural modification is any operation
 * that adds or deletes one or more mappings; merely changing the value
 * associated with a key that an instance already contains is not a
 * structural modification.)  This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the map.
 *
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * {@code remove} method, the iterator will throw a
 * {@link ConcurrentModificationException}.
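The wrapping idiom above can be exercised concretely. This sketch is not part of the source file; it assumes the documented behavior that `Collections.synchronizedMap` also synchronizes default methods such as `merge`, so two writer threads cannot lose updates.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SyncWrapDemo {
    static int parallelCount() throws InterruptedException {
        // Wrap at creation time so no thread ever touches the bare HashMap.
        Map<String, Integer> m = Collections.synchronizedMap(new HashMap<>());
        Runnable writer = () -> {
            for (int i = 0; i < 1000; i++)
                m.merge("count", 1, Integer::sum); // synchronized by the wrapper
        };
        Thread t1 = new Thread(writer), t2 = new Thread(writer);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return m.get("count");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(parallelCount()); // prints 2000
    }
}
```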
 * Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw {@code ConcurrentModificationException} on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/java.base/java/util/package-summary.html#CollectionsFramework">
 * Java Collections Framework</a>.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 * @see     TreeMap
 * @see     Hashtable
 * @since   1.2
 */
public class HashMap<K,V> extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable {

    @java.io.Serial
    private static final long serialVersionUID = 362498820763181265L;

    /*
     * Implementation notes.
     *
     * This map usually acts as a binned (bucketed) hash table, but
     * when bins get too large, they are transformed into bins of
     * TreeNodes, each structured similarly to those in
     * java.util.TreeMap. Most methods try to use normal bins, but
     * relay to TreeNode methods when applicable (simply by checking
     * instanceof a node). Bins of TreeNodes may be traversed and
     * used like any others, but additionally support faster lookup
     * when overpopulated.
     * However, since the vast majority of bins in
     * normal use are not overpopulated, checking for existence of
     * tree bins may be delayed in the course of table methods.
     *
     * Tree bins (i.e., bins whose elements are all TreeNodes) are
     * ordered primarily by hashCode, but in the case of ties, if two
     * elements are of the same "class C implements Comparable<C>"
     * type, then their compareTo method is used for ordering. (We
     * conservatively check generic types via reflection to validate
     * this -- see method comparableClassFor.) The added complexity
     * of tree bins is worthwhile in providing worst-case O(log n)
     * operations when keys either have distinct hashes or are
     * orderable. Thus, performance degrades gracefully under
     * accidental or malicious usages in which hashCode() methods
     * return values that are poorly distributed, as well as those in
     * which many keys share a hashCode, so long as they are also
     * Comparable. (If neither of these apply, we may waste about a
     * factor of two in time and space compared to taking no
     * precautions. But the only known cases stem from poor user
     * programming practices that are already so slow that this makes
     * little difference.)
     *
     * Because TreeNodes are about twice the size of regular nodes, we
     * use them only when bins contain enough nodes to warrant use
     * (see TREEIFY_THRESHOLD). And when they become too small (due to
     * removal or resizing) they are converted back to plain bins. In
     * usages with well-distributed user hashCodes, tree bins are
     * rarely used.  Ideally, under random hashCodes, the frequency of
     * nodes in bins follows a Poisson distribution
     * (http://en.wikipedia.org/wiki/Poisson_distribution) with a
     * parameter of about 0.5 on average for the default resizing
     * threshold of 0.75, although with a large variance because of
     * resizing granularity. Ignoring variance, the expected
     * occurrences of list size k are (exp(-0.5) * pow(0.5, k) /
     * factorial(k)).
     * The first values are:
     *
     * 0:    0.60653066
     * 1:    0.30326533
     * 2:    0.07581633
     * 3:    0.01263606
     * 4:    0.00157952
     * 5:    0.00015795
     * 6:    0.00001316
     * 7:    0.00000094
     * 8:    0.00000006
     * more: less than 1 in ten million
     *
     * The root of a tree bin is normally its first node.  However,
     * sometimes (currently only upon Iterator.remove), the root might
     * be elsewhere, but can be recovered following parent links
     * (method TreeNode.root()).
     *
     * All applicable internal methods accept a hash code as an
     * argument (as normally supplied from a public method), allowing
     * them to call each other without recomputing user hashCodes.
     * Most internal methods also accept a "tab" argument, that is
     * normally the current table, but may be a new or old one when
     * resizing or converting.
     *
     * When bin lists are treeified, split, or untreeified, we keep
     * them in the same relative access/traversal order (i.e., field
     * Node.next) to better preserve locality, and to slightly
     * simplify handling of splits and traversals that invoke
     * iterator.remove. When using comparators on insertion, to keep a
     * total ordering (or as close as is required here) across
     * rebalancings, we compare classes and identityHashCodes as
     * tie-breakers.
     *
     * The use and transitions among plain vs tree modes is
     * complicated by the existence of subclass LinkedHashMap. See
     * below for hook methods defined to be invoked upon insertion,
     * removal and access that allow LinkedHashMap internals to
     * otherwise remain independent of these mechanics.
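The Poisson table above can be reproduced directly from the stated formula exp(-0.5) * pow(0.5, k) / factorial(k). This verification sketch is not part of the source file.

```java
public class PoissonBinDemo {
    // Expected probability that a bin holds k nodes under random hashes,
    // Poisson with parameter 0.5: exp(-0.5) * 0.5^k / k!
    static double p(int k) {
        double fact = 1;
        for (int i = 2; i <= k; i++)
            fact *= i;
        return Math.exp(-0.5) * Math.pow(0.5, k) / fact;
    }

    public static void main(String[] args) {
        // Reproduces the table in the implementation notes (0: 0.60653066, ...).
        for (int k = 0; k <= 8; k++)
            System.out.printf("%d: %.8f%n", k, p(k));
    }
}
```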
(This also226* requires that a map instance be passed to some utility methods227* that may create new nodes.)228*229* The concurrent-programming-like SSA-based coding style helps230* avoid aliasing errors amid all of the twisty pointer operations.231*/232233/**234* The default initial capacity - MUST be a power of two.235*/236static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16237238/**239* The maximum capacity, used if a higher value is implicitly specified240* by either of the constructors with arguments.241* MUST be a power of two <= 1<<30.242*/243static final int MAXIMUM_CAPACITY = 1 << 30;244245/**246* The load factor used when none specified in constructor.247*/248static final float DEFAULT_LOAD_FACTOR = 0.75f;249250/**251* The bin count threshold for using a tree rather than list for a252* bin. Bins are converted to trees when adding an element to a253* bin with at least this many nodes. The value must be greater254* than 2 and should be at least 8 to mesh with assumptions in255* tree removal about conversion back to plain bins upon256* shrinkage.257*/258static final int TREEIFY_THRESHOLD = 8;259260/**261* The bin count threshold for untreeifying a (split) bin during a262* resize operation. Should be less than TREEIFY_THRESHOLD, and at263* most 6 to mesh with shrinkage detection under removal.264*/265static final int UNTREEIFY_THRESHOLD = 6;266267/**268* The smallest table capacity for which bins may be treeified.269* (Otherwise the table is resized if too many nodes in a bin.)270* Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts271* between resizing and treeification thresholds.272*/273static final int MIN_TREEIFY_CAPACITY = 64;274275/**276* Basic hash bin node, used for most entries. 
     * (See below for
     * TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
     */
    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        V value;
        Node<K,V> next;

        Node(int hash, K key, V value, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        public final K getKey()        { return key; }
        public final V getValue()      { return value; }
        public final String toString() { return key + "=" + value; }

        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (o == this)
                return true;

            return o instanceof Map.Entry<?, ?> e
                    && Objects.equals(key, e.getKey())
                    && Objects.equals(value, e.getValue());
        }
    }

    /* ---------------- Static utilities -------------- */

    /**
     * Computes key.hashCode() and spreads (XORs) higher bits of hash
     * to lower.  Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide. (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)  So we
     * apply a transform that spreads the impact of higher bits
     * downward. There is a tradeoff between speed, utility, and
     * quality of bit-spreading. Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     */
    static final int hash(Object key) {
        int h;
        return (key == null) ?
            0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    /**
     * Returns x's Class if it is of the form "class C implements
     * Comparable<C>", else null.
     */
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c; Type[] ts, as; ParameterizedType p;
            if ((c = x.getClass()) == String.class) // bypass checks
                return c;
            if ((ts = c.getGenericInterfaces()) != null) {
                for (Type t : ts) {
                    if ((t instanceof ParameterizedType) &&
                        ((p = (ParameterizedType) t).getRawType() ==
                         Comparable.class) &&
                        (as = p.getActualTypeArguments()) != null &&
                        as.length == 1 && as[0] == c) // type arg is c
                        return c;
                }
            }
        }
        return null;
    }

    /**
     * Returns k.compareTo(x) if x matches kc (k's screened comparable
     * class), else 0.
     */
    @SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
    static int compareComparables(Class<?> kc, Object k, Object x) {
        return (x == null || x.getClass() != kc ? 0 :
                ((Comparable)k).compareTo(x));
    }

    /**
     * Returns a power of two size for the given target capacity.
     */
    static final int tableSizeFor(int cap) {
        int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    /* ---------------- Fields -------------- */

    /**
     * The table, initialized on first use, and resized as
     * necessary. When allocated, length is always a power of two.
     * (We also tolerate length zero in some operations to allow
     * bootstrapping mechanics that are currently not needed.)
     */
    transient Node<K,V>[] table;

    /**
     * Holds cached entrySet().
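The hash-spreading and power-of-two sizing utilities above can be replicated outside `java.util` to see their effect. This sketch is not part of the source file; since the real methods are package-private, their bodies are copied locally.

```java
public class HashUtilsDemo {
    // Local replica of HashMap.hash: XOR the high 16 bits into the low 16.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    // Local replica of HashMap.tableSizeFor: round up to the next power of two.
    static int tableSizeFor(int cap) {
        int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
        return (n < 0) ? 1 : (n >= (1 << 30)) ? (1 << 30) : n + 1;
    }

    public static void main(String[] args) {
        // Two keys differing only above bit 16 collide in a 16-bucket table
        // under raw masking, but not once the high bits are spread downward.
        int a = 0x10000, b = 0x20000; // Integer.hashCode() is the value itself
        System.out.println((a & 15) == (b & 15));             // prints true: raw low bits collide
        System.out.println((hash(a) & 15) == (hash(b) & 15)); // prints false: spread hashes differ

        System.out.println(tableSizeFor(17)); // prints 32
    }
}
```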
     * Note that AbstractMap fields are used
     * for keySet() and values().
     */
    transient Set<Map.Entry<K,V>> entrySet;

    /**
     * The number of key-value mappings contained in this map.
     */
    transient int size;

    /**
     * The number of times this HashMap has been structurally modified.
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,
     * rehash).  This field is used to make iterators on Collection-views of
     * the HashMap fail-fast.  (See ConcurrentModificationException).
     */
    transient int modCount;

    /**
     * The next size value at which to resize (capacity * load factor).
     *
     * @serial
     */
    // (The javadoc description is true upon serialization.
    // Additionally, if the table array has not been allocated, this
    // field holds the initial array capacity, or zero signifying
    // DEFAULT_INITIAL_CAPACITY.)
    int threshold;

    /**
     * The load factor for the hash table.
     *
     * @serial
     */
    final float loadFactor;

    /* ---------------- Public operations -------------- */

    /**
     * Constructs an empty {@code HashMap} with the specified initial
     * capacity and load factor.
     *
     * @param  initialCapacity the initial capacity
     * @param  loadFactor      the load factor
     * @throws IllegalArgumentException if the initial capacity is negative
     *         or the load factor is nonpositive
     */
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);
        this.loadFactor = loadFactor;
        this.threshold = tableSizeFor(initialCapacity);
    }

    /**
     * Constructs an empty {@code HashMap} with the specified initial
     * capacity and the default load factor
     * (0.75).
     *
     * @param  initialCapacity the initial capacity.
     * @throws IllegalArgumentException if the initial capacity is negative.
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs an empty {@code HashMap} with the default initial capacity
     * (16) and the default load factor (0.75).
     */
    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    /**
     * Constructs a new {@code HashMap} with the same mappings as the
     * specified {@code Map}.  The {@code HashMap} is created with
     * default load factor (0.75) and an initial capacity sufficient to
     * hold the mappings in the specified {@code Map}.
     *
     * @param   m the map whose mappings are to be placed in this map
     * @throws  NullPointerException if the specified map is null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }

    /**
     * Implements Map.putAll and Map constructor.
     *
     * @param m the map
     * @param evict false when initially constructing this map, else
     * true (relayed to method afterNodeInsertion).
     */
    final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
        int s = m.size();
        if (s > 0) {
            if (table == null) { // pre-size
                float ft = ((float)s / loadFactor) + 1.0F;
                int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                         (int)ft : MAXIMUM_CAPACITY);
                if (t > threshold)
                    threshold = tableSizeFor(t);
            } else {
                // Because of linked-list bucket constraints, we cannot
                // expand all at once, but can reduce total resize
                // effort by repeated doubling now vs later
                while (s > threshold && table.length < MAXIMUM_CAPACITY)
                    resize();
            }

            for (Map.Entry<? extends K, ?
                    extends V> e : m.entrySet()) {
                K key = e.getKey();
                V value = e.getValue();
                putVal(hash(key), key, value, false, evict);
            }
        }
    }

    /**
     * Returns the number of key-value mappings in this map.
     *
     * @return the number of key-value mappings in this map
     */
    public int size() {
        return size;
    }

    /**
     * Returns {@code true} if this map contains no key-value mappings.
     *
     * @return {@code true} if this map contains no key-value mappings
     */
    public boolean isEmpty() {
        return size == 0;
    }

    /**
     * Returns the value to which the specified key is mapped,
     * or {@code null} if this map contains no mapping for the key.
     *
     * <p>More formally, if this map contains a mapping from a key
     * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
     * key.equals(k))}, then this method returns {@code v}; otherwise
     * it returns {@code null}.  (There can be at most one such mapping.)
     *
     * <p>A return value of {@code null} does not <i>necessarily</i>
     * indicate that the map contains no mapping for the key; it's also
     * possible that the map explicitly maps the key to {@code null}.
     * The {@link #containsKey containsKey} operation may be used to
     * distinguish these two cases.
     *
     * @see #put(Object, Object)
     */
    public V get(Object key) {
        Node<K,V> e;
        return (e = getNode(key)) == null ?
            null : e.value;
    }

    /**
     * Implements Map.get and related methods.
     *
     * @param key the key
     * @return the node, or null if none
     */
    final Node<K,V> getNode(Object key) {
        Node<K,V>[] tab; Node<K,V> first, e; int n, hash; K k;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (first = tab[(n - 1) & (hash = hash(key))]) != null) {
            if (first.hash == hash && // always check first node
                ((k = first.key) == key || (key != null && key.equals(k))))
                return first;
            if ((e = first.next) != null) {
                if (first instanceof TreeNode)
                    return ((TreeNode<K,V>)first).getTreeNode(hash, key);
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        return e;
                } while ((e = e.next) != null);
            }
        }
        return null;
    }

    /**
     * Returns {@code true} if this map contains a mapping for the
     * specified key.
     *
     * @param   key   The key whose presence in this map is to be tested
     * @return {@code true} if this map contains a mapping for the specified
     * key.
     */
    public boolean containsKey(Object key) {
        return getNode(key) != null;
    }

    /**
     * Associates the specified value with the specified key in this map.
     * If the map previously contained a mapping for the key, the old
     * value is replaced.
     *
     * @param key key with which the specified value is to be associated
     * @param value value to be associated with the specified key
     * @return the previous value associated with {@code key}, or
     *         {@code null} if there was no mapping for {@code key}.
     *         (A {@code null} return can also indicate that the map
     *         previously associated {@code null} with {@code key}.)
     */
    public V put(K key, V value) {
        return putVal(hash(key), key, value, false, true);
    }

    /**
     * Implements Map.put and related methods.
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to put
     * @param onlyIfAbsent if true, don't change existing value
     * @param evict if false, the table is in creation
mode.619* @return previous value, or null if none620*/621final V putVal(int hash, K key, V value, boolean onlyIfAbsent,622boolean evict) {623Node<K,V>[] tab; Node<K,V> p; int n, i;624if ((tab = table) == null || (n = tab.length) == 0)625n = (tab = resize()).length;626if ((p = tab[i = (n - 1) & hash]) == null)627tab[i] = newNode(hash, key, value, null);628else {629Node<K,V> e; K k;630if (p.hash == hash &&631((k = p.key) == key || (key != null && key.equals(k))))632e = p;633else if (p instanceof TreeNode)634e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);635else {636for (int binCount = 0; ; ++binCount) {637if ((e = p.next) == null) {638p.next = newNode(hash, key, value, null);639if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st640treeifyBin(tab, hash);641break;642}643if (e.hash == hash &&644((k = e.key) == key || (key != null && key.equals(k))))645break;646p = e;647}648}649if (e != null) { // existing mapping for key650V oldValue = e.value;651if (!onlyIfAbsent || oldValue == null)652e.value = value;653afterNodeAccess(e);654return oldValue;655}656}657++modCount;658if (++size > threshold)659resize();660afterNodeInsertion(evict);661return null;662}663664/**665* Initializes or doubles table size. If null, allocates in666* accord with initial capacity target held in field threshold.667* Otherwise, because we are using power-of-two expansion, the668* elements from each bin must either stay at same index, or move669* with a power of two offset in the new table.670*671* @return the table672*/673final Node<K,V>[] resize() {674Node<K,V>[] oldTab = table;675int oldCap = (oldTab == null) ? 
            0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        if (oldCap > 0) {
            if (oldCap >= MAXIMUM_CAPACITY) {
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                newThr = oldThr << 1; // double threshold
        }
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        else {               // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        threshold = newThr;
        @SuppressWarnings({"rawtypes","unchecked"})
        Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        table = newTab;
        if (oldTab != null) {
            for (int j = 0; j < oldCap; ++j) {
                Node<K,V> e;
                if ((e = oldTab[j]) != null) {
                    oldTab[j] = null;
                    if (e.next == null)
                        newTab[e.hash & (newCap - 1)] = e;
                    else if (e instanceof TreeNode)
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    else { // preserve order
                        Node<K,V> loHead = null, loTail = null;
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next;
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        } while ((e = next) != null);
                        if (loTail != null) {
                            loTail.next = null;
                            newTab[j] = loHead;
                        }
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }

    /**
     * Replaces all linked nodes in bin at index for given hash unless
     * table is too small, in which case resizes instead.
     */
    final void treeifyBin(Node<K,V>[] tab, int hash) {
        int n, index; Node<K,V> e;
        if (tab == null || (n = tab.length) <
            MIN_TREEIFY_CAPACITY)
            resize();
        else if ((e = tab[index = (n - 1) & hash]) != null) {
            TreeNode<K,V> hd = null, tl = null;
            do {
                TreeNode<K,V> p = replacementTreeNode(e, null);
                if (tl == null)
                    hd = p;
                else {
                    p.prev = tl;
                    tl.next = p;
                }
                tl = p;
            } while ((e = e.next) != null);
            if ((tab[index] = hd) != null)
                hd.treeify(tab);
        }
    }

    /**
     * Copies all of the mappings from the specified map to this map.
     * These mappings will replace any mappings that this map had for
     * any of the keys currently in the specified map.
     *
     * @param m mappings to be stored in this map
     * @throws NullPointerException if the specified map is null
     */
    public void putAll(Map<? extends K, ? extends V> m) {
        putMapEntries(m, true);
    }

    /**
     * Removes the mapping for the specified key from this map if present.
     *
     * @param  key key whose mapping is to be removed from the map
     * @return the previous value associated with {@code key}, or
     *         {@code null} if there was no mapping for {@code key}.
     *         (A {@code null} return can also indicate that the map
     *         previously associated {@code null} with {@code key}.)
     */
    public V remove(Object key) {
        Node<K,V> e;
        return (e = removeNode(hash(key), key, null, false, true)) == null ?
            null : e.value;
    }

    /**
     * Implements Map.remove and related methods.
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to match if matchValue, else ignored
     * @param matchValue if true only remove if value is equal
     * @param movable if false do not move other nodes while removing
     * @return the node, or null if none
     */
    final Node<K,V> removeNode(int hash, Object key, Object value,
                               boolean matchValue, boolean movable) {
        Node<K,V>[] tab; Node<K,V> p; int n, index;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (p = tab[index = (n - 1) & hash]) != null) {
            Node<K,V> node = null, e; K k; V v;
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null &&
                    key.equals(k))))
                node = p;
            else if ((e = p.next) != null) {
                if (p instanceof TreeNode)
                    node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
                else {
                    do {
                        if (e.hash == hash &&
                            ((k = e.key) == key ||
                             (key != null && key.equals(k)))) {
                            node = e;
                            break;
                        }
                        p = e;
                    } while ((e = e.next) != null);
                }
            }
            if (node != null && (!matchValue || (v = node.value) == value ||
                                 (value != null && value.equals(v)))) {
                if (node instanceof TreeNode)
                    ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
                else if (node == p)
                    tab[index] = node.next;
                else
                    p.next = node.next;
                ++modCount;
                --size;
                afterNodeRemoval(node);
                return node;
            }
        }
        return null;
    }

    /**
     * Removes all of the mappings from this map.
     * The map will be empty after this call returns.
     */
    public void clear() {
        Node<K,V>[] tab;
        modCount++;
        if ((tab = table) != null && size > 0) {
            size = 0;
            for (int i = 0; i < tab.length; ++i)
                tab[i] = null;
        }
    }

    /**
     * Returns {@code true} if this map maps one or more keys to the
     * specified value.
     *
     * @param value value whose presence in this map is to be tested
     * @return {@code true} if this map maps one or more keys to the
     *         specified value
     */
    public boolean containsValue(Object value) {
        Node<K,V>[] tab; V v;
        if ((tab = table) != null && size > 0) {
            for (Node<K,V> e : tab) {
                for (; e != null; e = e.next) {
                    if ((v = e.value) == value ||
                        (value != null && value.equals(v)))
                        return true;
                }
            }
        }
        return false;
    }

    /**
     * Returns a {@link Set} view of the keys contained in this map.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.  If the map is modified
     * while an iteration over the set is in progress (except through
     * the iterator's own {@code remove} operation), the results of
     * the iteration are undefined.
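The two-way key-set view described above can be demonstrated with a short usage sketch (not part of the source file):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class KeySetViewDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("b", 2);

        Set<String> keys = m.keySet();          // a live view, not a copy
        keys.remove("a");                       // removing from the view removes the mapping
        System.out.println(m.containsKey("a")); // prints false
        System.out.println(m.size());           // prints 1

        m.put("c", 3);                          // changes to the map show in the view
        System.out.println(keys.contains("c")); // prints true
    }
}
```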
     * The set supports element removal,
     * which removes the corresponding mapping from the map, via the
     * {@code Iterator.remove}, {@code Set.remove},
     * {@code removeAll}, {@code retainAll}, and {@code clear}
     * operations.  It does not support the {@code add} or {@code addAll}
     * operations.
     *
     * @return a set view of the keys contained in this map
     */
    public Set<K> keySet() {
        Set<K> ks = keySet;
        if (ks == null) {
            ks = new KeySet();
            keySet = ks;
        }
        return ks;
    }

    /**
     * Prepares the array for {@link Collection#toArray(Object[])} implementation.
     * If supplied array is smaller than this map size, a new array is allocated.
     * If supplied array is bigger than this map size, a null is written at size index.
     *
     * @param a an original array passed to {@code toArray()} method
     * @param <T> type of array elements
     * @return an array ready to be filled and returned from {@code toArray()} method.
     */
    @SuppressWarnings("unchecked")
    final <T> T[] prepareArray(T[] a) {
        int size = this.size;
        if (a.length < size) {
            return (T[]) java.lang.reflect.Array
                    .newInstance(a.getClass().getComponentType(), size);
        }
        if (a.length > size) {
            a[size] = null;
        }
        return a;
    }

    /**
     * Fills an array with this map keys and returns it. This method assumes
     * that input array is big enough to fit all the keys. Use
     * {@link #prepareArray(Object[])} to ensure this.
     *
     * @param a an array to fill
     * @param <T> type of array elements
     * @return supplied array
     */
    <T> T[] keysToArray(T[] a) {
        Object[] r = a;
        Node<K,V>[] tab;
        int idx = 0;
        if (size > 0 && (tab = table) != null) {
            for (Node<K,V> e : tab) {
                for (; e != null; e = e.next) {
                    r[idx++] = e.key;
                }
            }
        }
        return a;
    }

    /**
     * Fills an array with this map values and returns it. This method assumes
     * that input array is big enough to fit all the values.
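The `prepareArray` behavior above is visible through the public `Collection.toArray(T[])` contract. This usage sketch is not part of the source file:

```java
import java.util.HashMap;
import java.util.Map;

public class ToArrayDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("x", 1);
        m.put("y", 2);

        // Undersized array: a new array of exactly size() elements is allocated.
        String[] exact = m.keySet().toArray(new String[0]);
        System.out.println(exact.length); // prints 2

        // Oversized array: the element at index size() is set to null
        // (the Collection.toArray(T[]) contract, implemented by prepareArray).
        String[] big = m.keySet().toArray(new String[5]);
        System.out.println(big[2] == null); // prints true
    }
}
```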
     * Use
     * {@link #prepareArray(Object[])} to ensure this.
     *
     * @param a an array to fill
     * @param <T> type of array elements
     * @return supplied array
     */
    <T> T[] valuesToArray(T[] a) {
        Object[] r = a;
        Node<K,V>[] tab;
        int idx = 0;
        if (size > 0 && (tab = table) != null) {
            for (Node<K,V> e : tab) {
                for (; e != null; e = e.next) {
                    r[idx++] = e.value;
                }
            }
        }
        return a;
    }

    final class KeySet extends AbstractSet<K> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<K> iterator()     { return new KeyIterator(); }
        public final boolean contains(Object o) { return containsKey(o); }
        public final boolean remove(Object key) {
            return removeNode(hash(key), key, null, false, true) != null;
        }
        public final Spliterator<K> spliterator() {
            return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
        }

        public Object[] toArray() {
            return keysToArray(new Object[size]);
        }

        public <T> T[] toArray(T[] a) {
            return keysToArray(prepareArray(a));
        }

        public final void forEach(Consumer<? super K> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (Node<K,V> e : tab) {
                    for (; e != null; e = e.next)
                        action.accept(e.key);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    /**
     * Returns a {@link Collection} view of the values contained in this map.
     * The collection is backed by the map, so changes to the map are
     * reflected in the collection, and vice-versa.  If the map is
     * modified while an iteration over the collection is in progress
     * (except through the iterator's own {@code remove} operation),
     * the results of the iteration are undefined.
     * The collection
     * supports element removal, which removes the corresponding
     * mapping from the map, via the {@code Iterator.remove},
     * {@code Collection.remove}, {@code removeAll},
     * {@code retainAll} and {@code clear} operations.  It does not
     * support the {@code add} or {@code addAll} operations.
     *
     * @return a view of the values contained in this map
     */
    public Collection<V> values() {
        Collection<V> vs = values;
        if (vs == null) {
            vs = new Values();
            values = vs;
        }
        return vs;
    }

    final class Values extends AbstractCollection<V> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<V> iterator()     { return new ValueIterator(); }
        public final boolean contains(Object o) { return containsValue(o); }
        public final Spliterator<V> spliterator() {
            return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);
        }

        public Object[] toArray() {
            return valuesToArray(new Object[size]);
        }

        public <T> T[] toArray(T[] a) {
            return valuesToArray(prepareArray(a));
        }

        public final void forEach(Consumer<? super V> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (Node<K,V> e : tab) {
                    for (; e != null; e = e.next)
                        action.accept(e.value);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    /**
     * Returns a {@link Set} view of the mappings contained in this map.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.  If the map is modified
     * while an iteration over the set is in progress (except through
     * the iterator's own {@code remove} operation, or through the
     * {@code setValue} operation on a map entry returned by the
     * iterator) the results of the iteration are undefined.
     * The set
     * supports element removal, which removes the corresponding
     * mapping from the map, via the {@code Iterator.remove},
     * {@code Set.remove}, {@code removeAll}, {@code retainAll} and
     * {@code clear} operations.  It does not support the
     * {@code add} or {@code addAll} operations.
     *
     * @return a set view of the mappings contained in this map
     */
    public Set<Map.Entry<K,V>> entrySet() {
        Set<Map.Entry<K,V>> es;
        return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
    }

    final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<Map.Entry<K,V>> iterator() {
            return new EntryIterator();
        }
        public final boolean contains(Object o) {
            if (!(o instanceof Map.Entry<?, ?> e))
                return false;
            Object key = e.getKey();
            Node<K,V> candidate = getNode(key);
            return candidate != null && candidate.equals(e);
        }
        public final boolean remove(Object o) {
            if (o instanceof Map.Entry<?, ?> e) {
                Object key = e.getKey();
                Object value = e.getValue();
                return removeNode(hash(key), key, value, true, true) != null;
            }
            return false;
        }
        public final Spliterator<Map.Entry<K,V>> spliterator() {
            return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
        }
        public final void forEach(Consumer<? super Map.Entry<K,V>> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (Node<K,V> e : tab) {
                    for (; e != null; e = e.next)
                        action.accept(e);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    // Overrides of JDK8 Map extension methods

    @Override
    public V getOrDefault(Object key, V defaultValue) {
        Node<K,V> e;
        return (e = getNode(key)) == null ?
            defaultValue : e.value;
    }

    @Override
    public V putIfAbsent(K key, V value) {
        return putVal(hash(key), key, value, true, true);
    }

    @Override
    public boolean remove(Object key, Object value) {
        return removeNode(hash(key), key, value, true, true) != null;
    }

    @Override
    public boolean replace(K key, V oldValue, V newValue) {
        Node<K,V> e; V v;
        if ((e = getNode(key)) != null &&
            ((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {
            e.value = newValue;
            afterNodeAccess(e);
            return true;
        }
        return false;
    }

    @Override
    public V replace(K key, V value) {
        Node<K,V> e;
        if ((e = getNode(key)) != null) {
            V oldValue = e.value;
            e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
        return null;
    }

    /**
     * {@inheritDoc}
     *
     * <p>This method will, on a best-effort basis, throw a
     * {@link ConcurrentModificationException} if it is detected that the
     * mapping function modifies this map during computation.
     *
     * @throws ConcurrentModificationException if it is detected that the
     * mapping function modified this map
     */
    @Override
    public V computeIfAbsent(K key,
                             Function<? super K, ?
                             extends V> mappingFunction) {
        if (mappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        Node<K,V>[] tab; Node<K,V> first; int n, i;
        int binCount = 0;
        TreeNode<K,V> t = null;
        Node<K,V> old = null;
        if (size > threshold || (tab = table) == null ||
            (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode)
                old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
            else {
                Node<K,V> e = first; K k;
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
            V oldValue;
            if (old != null && (oldValue = old.value) != null) {
                afterNodeAccess(old);
                return oldValue;
            }
        }
        int mc = modCount;
        V v = mappingFunction.apply(key);
        if (mc != modCount) { throw new ConcurrentModificationException(); }
        if (v == null) {
            return null;
        } else if (old != null) {
            old.value = v;
            afterNodeAccess(old);
            return v;
        }
        else if (t != null)
            t.putTreeVal(this, tab, hash, key, v);
        else {
            tab[i] = newNode(hash, key, v, first);
            if (binCount >= TREEIFY_THRESHOLD - 1)
                treeifyBin(tab, hash);
        }
        modCount = mc + 1;
        ++size;
        afterNodeInsertion(true);
        return v;
    }

    /**
     * {@inheritDoc}
     *
     * <p>This method will, on a best-effort basis, throw a
     * {@link ConcurrentModificationException} if it is detected that the
     * remapping function modifies this map during computation.
     *
     * @throws ConcurrentModificationException if it is detected that the
     * remapping function modified this map
     */
    @Override
    public V computeIfPresent(K key,
                              BiFunction<? super K, ? super V, ?
                              extends V> remappingFunction) {
        if (remappingFunction == null)
            throw new NullPointerException();
        Node<K,V> e; V oldValue;
        if ((e = getNode(key)) != null &&
            (oldValue = e.value) != null) {
            int mc = modCount;
            V v = remappingFunction.apply(key, oldValue);
            if (mc != modCount) { throw new ConcurrentModificationException(); }
            if (v != null) {
                e.value = v;
                afterNodeAccess(e);
                return v;
            }
            else {
                int hash = hash(key);
                removeNode(hash, key, null, false, true);
            }
        }
        return null;
    }

    /**
     * {@inheritDoc}
     *
     * <p>This method will, on a best-effort basis, throw a
     * {@link ConcurrentModificationException} if it is detected that the
     * remapping function modifies this map during computation.
     *
     * @throws ConcurrentModificationException if it is detected that the
     * remapping function modified this map
     */
    @Override
    public V compute(K key,
                     BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        if (remappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        Node<K,V>[] tab; Node<K,V> first; int n, i;
        int binCount = 0;
        TreeNode<K,V> t = null;
        Node<K,V> old = null;
        if (size > threshold || (tab = table) == null ||
            (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode)
                old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
            else {
                Node<K,V> e = first; K k;
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
        }
        V oldValue = (old == null) ?
            null : old.value;
        int mc = modCount;
        V v = remappingFunction.apply(key, oldValue);
        if (mc != modCount) { throw new ConcurrentModificationException(); }
        if (old != null) {
            if (v != null) {
                old.value = v;
                afterNodeAccess(old);
            }
            else
                removeNode(hash, key, null, false, true);
        }
        else if (v != null) {
            if (t != null)
                t.putTreeVal(this, tab, hash, key, v);
            else {
                tab[i] = newNode(hash, key, v, first);
                if (binCount >= TREEIFY_THRESHOLD - 1)
                    treeifyBin(tab, hash);
            }
            modCount = mc + 1;
            ++size;
            afterNodeInsertion(true);
        }
        return v;
    }

    /**
     * {@inheritDoc}
     *
     * <p>This method will, on a best-effort basis, throw a
     * {@link ConcurrentModificationException} if it is detected that the
     * remapping function modifies this map during computation.
     *
     * @throws ConcurrentModificationException if it is detected that the
     * remapping function modified this map
     */
    @Override
    public V merge(K key, V value,
                   BiFunction<? super V, ? super V, ?
                   extends V> remappingFunction) {
        if (value == null || remappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        Node<K,V>[] tab; Node<K,V> first; int n, i;
        int binCount = 0;
        TreeNode<K,V> t = null;
        Node<K,V> old = null;
        if (size > threshold || (tab = table) == null ||
            (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode)
                old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
            else {
                Node<K,V> e = first; K k;
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
        }
        if (old != null) {
            V v;
            if (old.value != null) {
                int mc = modCount;
                v = remappingFunction.apply(old.value, value);
                if (mc != modCount) {
                    throw new ConcurrentModificationException();
                }
            } else {
                v = value;
            }
            if (v != null) {
                old.value = v;
                afterNodeAccess(old);
            }
            else
                removeNode(hash, key, null, false, true);
            return v;
        } else {
            if (t != null)
                t.putTreeVal(this, tab, hash, key, value);
            else {
                tab[i] = newNode(hash, key, value, first);
                if (binCount >= TREEIFY_THRESHOLD - 1)
                    treeifyBin(tab, hash);
            }
            ++modCount;
            ++size;
            afterNodeInsertion(true);
            return value;
        }
    }

    @Override
    public void forEach(BiConsumer<? super K, ? super V> action) {
        Node<K,V>[] tab;
        if (action == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (Node<K,V> e : tab) {
                for (; e != null; e = e.next)
                    action.accept(e.key, e.value);
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }

    @Override
    public void replaceAll(BiFunction<? super K, ? super V, ?
                           extends V> function) {
        Node<K,V>[] tab;
        if (function == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (Node<K,V> e : tab) {
                for (; e != null; e = e.next) {
                    e.value = function.apply(e.key, e.value);
                }
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }

    /* ------------------------------------------------------------ */
    // Cloning and serialization

    /**
     * Returns a shallow copy of this {@code HashMap} instance: the keys and
     * values themselves are not cloned.
     *
     * @return a shallow copy of this map
     */
    @SuppressWarnings("unchecked")
    @Override
    public Object clone() {
        HashMap<K,V> result;
        try {
            result = (HashMap<K,V>)super.clone();
        } catch (CloneNotSupportedException e) {
            // this shouldn't happen, since we are Cloneable
            throw new InternalError(e);
        }
        result.reinitialize();
        result.putMapEntries(this, false);
        return result;
    }

    // These methods are also used when serializing HashSets
    final float loadFactor() { return loadFactor; }
    final int capacity() {
        return (table != null) ? table.length :
            (threshold > 0) ? threshold :
            DEFAULT_INITIAL_CAPACITY;
    }

    /**
     * Saves this map to a stream (that is, serializes it).
     *
     * @param s the stream
     * @throws IOException if an I/O error occurs
     * @serialData The <i>capacity</i> of the HashMap (the length of the
     *             bucket array) is emitted (int), followed by the
     *             <i>size</i> (an int, the number of key-value
     *             mappings), followed by the key (Object) and value (Object)
     *             for each key-value mapping.
     *             The key-value mappings are
     *             emitted in no particular order.
     */
    @java.io.Serial
    private void writeObject(java.io.ObjectOutputStream s)
        throws IOException {
        int buckets = capacity();
        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();
        s.writeInt(buckets);
        s.writeInt(size);
        internalWriteEntries(s);
    }

    /**
     * Reconstitutes this map from a stream (that is, deserializes it).
     * @param s the stream
     * @throws ClassNotFoundException if the class of a serialized object
     *         could not be found
     * @throws IOException if an I/O error occurs
     */
    @java.io.Serial
    private void readObject(java.io.ObjectInputStream s)
        throws IOException, ClassNotFoundException {
        // Read in the threshold (ignored), loadfactor, and any hidden stuff
        s.defaultReadObject();
        reinitialize();
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new InvalidObjectException("Illegal load factor: " +
                                             loadFactor);
        s.readInt();                // Read and ignore number of buckets
        int mappings = s.readInt(); // Read number of mappings (size)
        if (mappings < 0)
            throw new InvalidObjectException("Illegal mappings count: " +
                                             mappings);
        else if (mappings > 0) { // (if zero, use defaults)
            // Size the table using given load factor only if within
            // range of 0.25...4.0
            float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
            float fc = (float)mappings / lf + 1.0f;
            int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?
                       DEFAULT_INITIAL_CAPACITY :
                       (fc >= MAXIMUM_CAPACITY) ?
                       MAXIMUM_CAPACITY :
                       tableSizeFor((int)fc));
            float ft = (float)cap * lf;
            threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?
                         (int)ft : Integer.MAX_VALUE);

            // Check Map.Entry[].class since it's the nearest public type to
            // what we're actually creating.
            SharedSecrets.getJavaObjectInputStreamAccess().checkArray(s, Map.Entry[].class,
                cap);
            @SuppressWarnings({"rawtypes","unchecked"})
            Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
            table = tab;

            // Read the keys and values, and put the mappings in the HashMap
            for (int i = 0; i < mappings; i++) {
                @SuppressWarnings("unchecked")
                K key = (K) s.readObject();
                @SuppressWarnings("unchecked")
                V value = (V) s.readObject();
                putVal(hash(key), key, value, false, false);
            }
        }
    }

    /* ------------------------------------------------------------ */
    // iterators

    abstract class HashIterator {
        Node<K,V> next;        // next entry to return
        Node<K,V> current;     // current entry
        int expectedModCount;  // for fast-fail
        int index;             // current slot

        HashIterator() {
            expectedModCount = modCount;
            Node<K,V>[] t = table;
            current = next = null;
            index = 0;
            if (t != null && size > 0) { // advance to first entry
                do {} while (index < t.length && (next = t[index++]) == null);
            }
        }

        public final boolean hasNext() {
            return next != null;
        }

        final Node<K,V> nextNode() {
            Node<K,V>[] t;
            Node<K,V> e = next;
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            if (e == null)
                throw new NoSuchElementException();
            if ((next = (current = e).next) == null && (t = table) != null) {
                do {} while (index < t.length && (next = t[index++]) == null);
            }
            return e;
        }

        public final void remove() {
            Node<K,V> p = current;
            if (p == null)
                throw new IllegalStateException();
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            current = null;
            removeNode(p.hash, p.key, null, false, false);
            expectedModCount = modCount;
        }
    }

    final class KeyIterator extends HashIterator
        implements Iterator<K> {
        public final K next() { return nextNode().key; }
    }

    final class ValueIterator extends HashIterator
        implements Iterator<V> {
        public final V next() { return nextNode().value;
        }
    }

    final class EntryIterator extends HashIterator
        implements Iterator<Map.Entry<K,V>> {
        public final Map.Entry<K,V> next() { return nextNode(); }
    }

    /* ------------------------------------------------------------ */
    // spliterators

    static class HashMapSpliterator<K,V> {
        final HashMap<K,V> map;
        Node<K,V> current;          // current node
        int index;                  // current index, modified on advance/split
        int fence;                  // one past last index
        int est;                    // size estimate
        int expectedModCount;       // for comodification checks

        HashMapSpliterator(HashMap<K,V> m, int origin,
                           int fence, int est,
                           int expectedModCount) {
            this.map = m;
            this.index = origin;
            this.fence = fence;
            this.est = est;
            this.expectedModCount = expectedModCount;
        }

        final int getFence() { // initialize fence and size on first use
            int hi;
            if ((hi = fence) < 0) {
                HashMap<K,V> m = map;
                est = m.size;
                expectedModCount = m.modCount;
                Node<K,V>[] tab = m.table;
                hi = fence = (tab == null) ? 0 : tab.length;
            }
            return hi;
        }

        public final long estimateSize() {
            getFence(); // force init
            return (long) est;
        }
    }

    static final class KeySpliterator<K,V>
        extends HashMapSpliterator<K,V>
        implements Spliterator<K> {
        KeySpliterator(HashMap<K,V> m, int origin, int fence, int est,
                       int expectedModCount) {
            super(m, origin, fence, est, expectedModCount);
        }

        public KeySpliterator<K,V> trySplit() {
            int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
            return (lo >= mid || current != null) ? null :
                new KeySpliterator<>(map, lo, index = mid, est >>>= 1,
                                     expectedModCount);
        }

        public void forEachRemaining(Consumer<? super K> action) {
            int i, hi, mc;
            if (action == null)
                throw new NullPointerException();
            HashMap<K,V> m = map;
            Node<K,V>[] tab = m.table;
            if ((hi = fence) < 0) {
                mc = expectedModCount = m.modCount;
                hi = fence = (tab == null) ?
                    0 : tab.length;
            }
            else
                mc = expectedModCount;
            if (tab != null && tab.length >= hi &&
                (i = index) >= 0 && (i < (index = hi) || current != null)) {
                Node<K,V> p = current;
                current = null;
                do {
                    if (p == null)
                        p = tab[i++];
                    else {
                        action.accept(p.key);
                        p = p.next;
                    }
                } while (p != null || i < hi);
                if (m.modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }

        public boolean tryAdvance(Consumer<? super K> action) {
            int hi;
            if (action == null)
                throw new NullPointerException();
            Node<K,V>[] tab = map.table;
            if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
                while (current != null || index < hi) {
                    if (current == null)
                        current = tab[index++];
                    else {
                        K k = current.key;
                        current = current.next;
                        action.accept(k);
                        if (map.modCount != expectedModCount)
                            throw new ConcurrentModificationException();
                        return true;
                    }
                }
            }
            return false;
        }

        public int characteristics() {
            return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
                Spliterator.DISTINCT;
        }
    }

    static final class ValueSpliterator<K,V>
        extends HashMapSpliterator<K,V>
        implements Spliterator<V> {
        ValueSpliterator(HashMap<K,V> m, int origin, int fence, int est,
                         int expectedModCount) {
            super(m, origin, fence, est, expectedModCount);
        }

        public ValueSpliterator<K,V> trySplit() {
            int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
            return (lo >= mid || current != null) ? null :
                new ValueSpliterator<>(map, lo, index = mid, est >>>= 1,
                                       expectedModCount);
        }

        public void forEachRemaining(Consumer<? super V> action) {
            int i, hi, mc;
            if (action == null)
                throw new NullPointerException();
            HashMap<K,V> m = map;
            Node<K,V>[] tab = m.table;
            if ((hi = fence) < 0) {
                mc = expectedModCount = m.modCount;
                hi = fence = (tab == null) ?
                    0 : tab.length;
            }
            else
                mc = expectedModCount;
            if (tab != null && tab.length >= hi &&
                (i = index) >= 0 && (i < (index = hi) || current != null)) {
                Node<K,V> p = current;
                current = null;
                do {
                    if (p == null)
                        p = tab[i++];
                    else {
                        action.accept(p.value);
                        p = p.next;
                    }
                } while (p != null || i < hi);
                if (m.modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }

        public boolean tryAdvance(Consumer<? super V> action) {
            int hi;
            if (action == null)
                throw new NullPointerException();
            Node<K,V>[] tab = map.table;
            if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
                while (current != null || index < hi) {
                    if (current == null)
                        current = tab[index++];
                    else {
                        V v = current.value;
                        current = current.next;
                        action.accept(v);
                        if (map.modCount != expectedModCount)
                            throw new ConcurrentModificationException();
                        return true;
                    }
                }
            }
            return false;
        }

        public int characteristics() {
            return (fence < 0 || est == map.size ? Spliterator.SIZED : 0);
        }
    }

    static final class EntrySpliterator<K,V>
        extends HashMapSpliterator<K,V>
        implements Spliterator<Map.Entry<K,V>> {
        EntrySpliterator(HashMap<K,V> m, int origin, int fence, int est,
                         int expectedModCount) {
            super(m, origin, fence, est, expectedModCount);
        }

        public EntrySpliterator<K,V> trySplit() {
            int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
            return (lo >= mid || current != null) ? null :
                new EntrySpliterator<>(map, lo, index = mid, est >>>= 1,
                                       expectedModCount);
        }

        public void forEachRemaining(Consumer<? super Map.Entry<K,V>> action) {
            int i, hi, mc;
            if (action == null)
                throw new NullPointerException();
            HashMap<K,V> m = map;
            Node<K,V>[] tab = m.table;
            if ((hi = fence) < 0) {
                mc = expectedModCount = m.modCount;
                hi = fence = (tab == null) ?
                    0 : tab.length;
            }
            else
                mc = expectedModCount;
            if (tab != null && tab.length >= hi &&
                (i = index) >= 0 && (i < (index = hi) || current != null)) {
                Node<K,V> p = current;
                current = null;
                do {
                    if (p == null)
                        p = tab[i++];
                    else {
                        action.accept(p);
                        p = p.next;
                    }
                } while (p != null || i < hi);
                if (m.modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }

        public boolean tryAdvance(Consumer<? super Map.Entry<K,V>> action) {
            int hi;
            if (action == null)
                throw new NullPointerException();
            Node<K,V>[] tab = map.table;
            if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
                while (current != null || index < hi) {
                    if (current == null)
                        current = tab[index++];
                    else {
                        Node<K,V> e = current;
                        current = current.next;
                        action.accept(e);
                        if (map.modCount != expectedModCount)
                            throw new ConcurrentModificationException();
                        return true;
                    }
                }
            }
            return false;
        }

        public int characteristics() {
            return (fence < 0 || est == map.size ?
                    Spliterator.SIZED : 0) |
                Spliterator.DISTINCT;
        }
    }

    /* ------------------------------------------------------------ */
    // LinkedHashMap support

    /*
     * The following package-protected methods are designed to be
     * overridden by LinkedHashMap, but not by any other subclass.
     * Nearly all other internal methods are also package-protected
     * but are declared final, so can be used by LinkedHashMap, view
     * classes, and HashSet.
     */

    // Create a regular (non-tree) node
    Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) {
        return new Node<>(hash, key, value, next);
    }

    // For conversion from TreeNodes to plain nodes
    Node<K,V> replacementNode(Node<K,V> p, Node<K,V> next) {
        return new Node<>(p.hash, p.key, p.value, next);
    }

    // Create a tree bin node
    TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next) {
        return new TreeNode<>(hash, key, value, next);
    }

    // For treeifyBin
    TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) {
        return new TreeNode<>(p.hash, p.key, p.value, next);
    }

    /**
     * Reset to initial default state.
     * Called by clone and readObject.
     */
    void reinitialize() {
        table = null;
        entrySet = null;
        keySet = null;
        values = null;
        modCount = 0;
        threshold = 0;
        size = 0;
    }

    // Callbacks to allow LinkedHashMap post-actions
    void afterNodeAccess(Node<K,V> p) { }
    void afterNodeInsertion(boolean evict) { }
    void afterNodeRemoval(Node<K,V> p) { }

    // Called only from writeObject, to ensure compatible ordering.
    void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {
        Node<K,V>[] tab;
        if (size > 0 && (tab = table) != null) {
            for (Node<K,V> e : tab) {
                for (; e != null; e = e.next) {
                    s.writeObject(e.key);
                    s.writeObject(e.value);
                }
            }
        }
    }

    /* ------------------------------------------------------------ */
    // Tree bins

    /**
     * Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn
     * extends Node) so can be used as extension of either regular or
     * linked node.
     */
    static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
        TreeNode<K,V> parent;  // red-black tree links
        TreeNode<K,V> left;
        TreeNode<K,V> right;
        TreeNode<K,V> prev;    // needed to unlink next upon deletion
        boolean red;
        TreeNode(int hash, K key, V val, Node<K,V> next) {
            super(hash, key, val, next);
        }

        /**
         * Returns root of tree containing this node.
         */
        final TreeNode<K,V> root() {
            for (TreeNode<K,V> r = this, p;;) {
                if ((p = r.parent) == null)
                    return r;
                r = p;
            }
        }

        /**
         * Ensures that the given root is the first node of its bin.
         */
        static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) {
            int n;
            if (root != null && tab != null && (n = tab.length) > 0) {
                int index = (n - 1) & root.hash;
                TreeNode<K,V> first = (TreeNode<K,V>)tab[index];
                if (root != first) {
                    Node<K,V> rn;
                    tab[index] = root;
                    TreeNode<K,V> rp = root.prev;
                    if ((rn = root.next) != null)
                        ((TreeNode<K,V>)rn).prev =
                            rp;
                    if (rp != null)
                        rp.next = rn;
                    if (first != null)
                        first.prev = root;
                    root.next = first;
                    root.prev = null;
                }
                assert checkInvariants(root);
            }
        }

        /**
         * Finds the node starting at root p with the given hash and key.
         * The kc argument caches comparableClassFor(key) upon first use
         * comparing keys.
         */
        final TreeNode<K,V> find(int h, Object k, Class<?> kc) {
            TreeNode<K,V> p = this;
            do {
                int ph, dir; K pk;
                TreeNode<K,V> pl = p.left, pr = p.right, q;
                if ((ph = p.hash) > h)
                    p = pl;
                else if (ph < h)
                    p = pr;
                else if ((pk = p.key) == k || (k != null && k.equals(pk)))
                    return p;
                else if (pl == null)
                    p = pr;
                else if (pr == null)
                    p = pl;
                else if ((kc != null ||
                          (kc = comparableClassFor(k)) != null) &&
                         (dir = compareComparables(kc, k, pk)) != 0)
                    p = (dir < 0) ? pl : pr;
                else if ((q = pr.find(h, k, kc)) != null)
                    return q;
                else
                    p = pl;
            } while (p != null);
            return null;
        }

        /**
         * Calls find for root node.
         */
        final TreeNode<K,V> getTreeNode(int h, Object k) {
            return ((parent != null) ? root() : this).find(h, k, null);
        }

        /**
         * Tie-breaking utility for ordering insertions when equal
         * hashCodes and non-comparable. We don't require a total
         * order, just a consistent insertion rule to maintain
         * equivalence across rebalancings.
         * Tie-breaking further than
         * necessary simplifies testing a bit.
         */
        static int tieBreakOrder(Object a, Object b) {
            int d;
            if (a == null || b == null ||
                (d = a.getClass().getName().
                 compareTo(b.getClass().getName())) == 0)
                d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
                     -1 : 1);
            return d;
        }

        /**
         * Forms tree of the nodes linked from this node.
         */
        final void treeify(Node<K,V>[] tab) {
            TreeNode<K,V> root = null;
            for (TreeNode<K,V> x = this, next; x != null; x = next) {
                next = (TreeNode<K,V>)x.next;
                x.left = x.right = null;
                if (root == null) {
                    x.parent = null;
                    x.red = false;
                    root = x;
                }
                else {
                    K k = x.key;
                    int h = x.hash;
                    Class<?> kc = null;
                    for (TreeNode<K,V> p = root;;) {
                        int dir, ph;
                        K pk = p.key;
                        if ((ph = p.hash) > h)
                            dir = -1;
                        else if (ph < h)
                            dir = 1;
                        else if ((kc == null &&
                                  (kc = comparableClassFor(k)) == null) ||
                                 (dir = compareComparables(kc, k, pk)) == 0)
                            dir = tieBreakOrder(k, pk);

                        TreeNode<K,V> xp = p;
                        if ((p = (dir <= 0) ? p.left : p.right) == null) {
                            x.parent = xp;
                            if (dir <= 0)
                                xp.left = x;
                            else
                                xp.right = x;
                            root = balanceInsertion(root, x);
                            break;
                        }
                    }
                }
            }
            moveRootToFront(tab, root);
        }

        /**
         * Returns a list of non-TreeNodes replacing those linked from
         * this node.
         */
        final Node<K,V> untreeify(HashMap<K,V> map) {
            Node<K,V> hd = null, tl = null;
            for (Node<K,V> q = this; q != null; q = q.next) {
                Node<K,V> p = map.replacementNode(q, null);
                if (tl == null)
                    hd = p;
                else
                    tl.next = p;
                tl = p;
            }
            return hd;
        }

        /**
         * Tree version of putVal.
         */
        final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
                                       int h, K k, V v) {
            Class<?> kc = null;
            boolean searched = false;
            TreeNode<K,V> root = (parent != null) ?
root() : this;2111for (TreeNode<K,V> p = root;;) {2112int dir, ph; K pk;2113if ((ph = p.hash) > h)2114dir = -1;2115else if (ph < h)2116dir = 1;2117else if ((pk = p.key) == k || (k != null && k.equals(pk)))2118return p;2119else if ((kc == null &&2120(kc = comparableClassFor(k)) == null) ||2121(dir = compareComparables(kc, k, pk)) == 0) {2122if (!searched) {2123TreeNode<K,V> q, ch;2124searched = true;2125if (((ch = p.left) != null &&2126(q = ch.find(h, k, kc)) != null) ||2127((ch = p.right) != null &&2128(q = ch.find(h, k, kc)) != null))2129return q;2130}2131dir = tieBreakOrder(k, pk);2132}21332134TreeNode<K,V> xp = p;2135if ((p = (dir <= 0) ? p.left : p.right) == null) {2136Node<K,V> xpn = xp.next;2137TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);2138if (dir <= 0)2139xp.left = x;2140else2141xp.right = x;2142xp.next = x;2143x.parent = x.prev = xp;2144if (xpn != null)2145((TreeNode<K,V>)xpn).prev = x;2146moveRootToFront(tab, balanceInsertion(root, x));2147return null;2148}2149}2150}21512152/**2153* Removes the given node, that must be present before this call.2154* This is messier than typical red-black deletion code because we2155* cannot swap the contents of an interior node with a leaf2156* successor that is pinned by "next" pointers that are accessible2157* independently during traversal. So instead we swap the tree2158* linkages. If the current tree appears to have too few nodes,2159* the bin is converted back to a plain bin. 
(The test triggers2160* somewhere between 2 and 6 nodes, depending on tree structure).2161*/2162final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab,2163boolean movable) {2164int n;2165if (tab == null || (n = tab.length) == 0)2166return;2167int index = (n - 1) & hash;2168TreeNode<K,V> first = (TreeNode<K,V>)tab[index], root = first, rl;2169TreeNode<K,V> succ = (TreeNode<K,V>)next, pred = prev;2170if (pred == null)2171tab[index] = first = succ;2172else2173pred.next = succ;2174if (succ != null)2175succ.prev = pred;2176if (first == null)2177return;2178if (root.parent != null)2179root = root.root();2180if (root == null2181|| (movable2182&& (root.right == null2183|| (rl = root.left) == null2184|| rl.left == null))) {2185tab[index] = first.untreeify(map); // too small2186return;2187}2188TreeNode<K,V> p = this, pl = left, pr = right, replacement;2189if (pl != null && pr != null) {2190TreeNode<K,V> s = pr, sl;2191while ((sl = s.left) != null) // find successor2192s = sl;2193boolean c = s.red; s.red = p.red; p.red = c; // swap colors2194TreeNode<K,V> sr = s.right;2195TreeNode<K,V> pp = p.parent;2196if (s == pr) { // p was s's direct parent2197p.parent = s;2198s.right = p;2199}2200else {2201TreeNode<K,V> sp = s.parent;2202if ((p.parent = sp) != null) {2203if (s == sp.left)2204sp.left = p;2205else2206sp.right = p;2207}2208if ((s.right = pr) != null)2209pr.parent = s;2210}2211p.left = null;2212if ((p.right = sr) != null)2213sr.parent = p;2214if ((s.left = pl) != null)2215pl.parent = s;2216if ((s.parent = pp) == null)2217root = s;2218else if (p == pp.left)2219pp.left = s;2220else2221pp.right = s;2222if (sr != null)2223replacement = sr;2224else2225replacement = p;2226}2227else if (pl != null)2228replacement = pl;2229else if (pr != null)2230replacement = pr;2231else2232replacement = p;2233if (replacement != p) {2234TreeNode<K,V> pp = replacement.parent = p.parent;2235if (pp == null)2236(root = replacement).red = false;2237else if (p == pp.left)2238pp.left = 
replacement;2239else2240pp.right = replacement;2241p.left = p.right = p.parent = null;2242}22432244TreeNode<K,V> r = p.red ? root : balanceDeletion(root, replacement);22452246if (replacement == p) { // detach2247TreeNode<K,V> pp = p.parent;2248p.parent = null;2249if (pp != null) {2250if (p == pp.left)2251pp.left = null;2252else if (p == pp.right)2253pp.right = null;2254}2255}2256if (movable)2257moveRootToFront(tab, r);2258}22592260/**2261* Splits nodes in a tree bin into lower and upper tree bins,2262* or untreeifies if now too small. Called only from resize;2263* see above discussion about split bits and indices.2264*2265* @param map the map2266* @param tab the table for recording bin heads2267* @param index the index of the table being split2268* @param bit the bit of hash to split on2269*/2270final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {2271TreeNode<K,V> b = this;2272// Relink into lo and hi lists, preserving order2273TreeNode<K,V> loHead = null, loTail = null;2274TreeNode<K,V> hiHead = null, hiTail = null;2275int lc = 0, hc = 0;2276for (TreeNode<K,V> e = b, next; e != null; e = next) {2277next = (TreeNode<K,V>)e.next;2278e.next = null;2279if ((e.hash & bit) == 0) {2280if ((e.prev = loTail) == null)2281loHead = e;2282else2283loTail.next = e;2284loTail = e;2285++lc;2286}2287else {2288if ((e.prev = hiTail) == null)2289hiHead = e;2290else2291hiTail.next = e;2292hiTail = e;2293++hc;2294}2295}22962297if (loHead != null) {2298if (lc <= UNTREEIFY_THRESHOLD)2299tab[index] = loHead.untreeify(map);2300else {2301tab[index] = loHead;2302if (hiHead != null) // (else is already treeified)2303loHead.treeify(tab);2304}2305}2306if (hiHead != null) {2307if (hc <= UNTREEIFY_THRESHOLD)2308tab[index + bit] = hiHead.untreeify(map);2309else {2310tab[index + bit] = hiHead;2311if (loHead != null)2312hiHead.treeify(tab);2313}2314}2315}23162317/* ------------------------------------------------------------ */2318// Red-black tree methods, all adapted from 
CLR23192320static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,2321TreeNode<K,V> p) {2322TreeNode<K,V> r, pp, rl;2323if (p != null && (r = p.right) != null) {2324if ((rl = p.right = r.left) != null)2325rl.parent = p;2326if ((pp = r.parent = p.parent) == null)2327(root = r).red = false;2328else if (pp.left == p)2329pp.left = r;2330else2331pp.right = r;2332r.left = p;2333p.parent = r;2334}2335return root;2336}23372338static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,2339TreeNode<K,V> p) {2340TreeNode<K,V> l, pp, lr;2341if (p != null && (l = p.left) != null) {2342if ((lr = p.left = l.right) != null)2343lr.parent = p;2344if ((pp = l.parent = p.parent) == null)2345(root = l).red = false;2346else if (pp.right == p)2347pp.right = l;2348else2349pp.left = l;2350l.right = p;2351p.parent = l;2352}2353return root;2354}23552356static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,2357TreeNode<K,V> x) {2358x.red = true;2359for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {2360if ((xp = x.parent) == null) {2361x.red = false;2362return x;2363}2364else if (!xp.red || (xpp = xp.parent) == null)2365return root;2366if (xp == (xppl = xpp.left)) {2367if ((xppr = xpp.right) != null && xppr.red) {2368xppr.red = false;2369xp.red = false;2370xpp.red = true;2371x = xpp;2372}2373else {2374if (x == xp.right) {2375root = rotateLeft(root, x = xp);2376xpp = (xp = x.parent) == null ? null : xp.parent;2377}2378if (xp != null) {2379xp.red = false;2380if (xpp != null) {2381xpp.red = true;2382root = rotateRight(root, xpp);2383}2384}2385}2386}2387else {2388if (xppl != null && xppl.red) {2389xppl.red = false;2390xp.red = false;2391xpp.red = true;2392x = xpp;2393}2394else {2395if (x == xp.left) {2396root = rotateRight(root, x = xp);2397xpp = (xp = x.parent) == null ? 
null : xp.parent;2398}2399if (xp != null) {2400xp.red = false;2401if (xpp != null) {2402xpp.red = true;2403root = rotateLeft(root, xpp);2404}2405}2406}2407}2408}2409}24102411static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root,2412TreeNode<K,V> x) {2413for (TreeNode<K,V> xp, xpl, xpr;;) {2414if (x == null || x == root)2415return root;2416else if ((xp = x.parent) == null) {2417x.red = false;2418return x;2419}2420else if (x.red) {2421x.red = false;2422return root;2423}2424else if ((xpl = xp.left) == x) {2425if ((xpr = xp.right) != null && xpr.red) {2426xpr.red = false;2427xp.red = true;2428root = rotateLeft(root, xp);2429xpr = (xp = x.parent) == null ? null : xp.right;2430}2431if (xpr == null)2432x = xp;2433else {2434TreeNode<K,V> sl = xpr.left, sr = xpr.right;2435if ((sr == null || !sr.red) &&2436(sl == null || !sl.red)) {2437xpr.red = true;2438x = xp;2439}2440else {2441if (sr == null || !sr.red) {2442if (sl != null)2443sl.red = false;2444xpr.red = true;2445root = rotateRight(root, xpr);2446xpr = (xp = x.parent) == null ?2447null : xp.right;2448}2449if (xpr != null) {2450xpr.red = (xp == null) ? false : xp.red;2451if ((sr = xpr.right) != null)2452sr.red = false;2453}2454if (xp != null) {2455xp.red = false;2456root = rotateLeft(root, xp);2457}2458x = root;2459}2460}2461}2462else { // symmetric2463if (xpl != null && xpl.red) {2464xpl.red = false;2465xp.red = true;2466root = rotateRight(root, xp);2467xpl = (xp = x.parent) == null ? null : xp.left;2468}2469if (xpl == null)2470x = xp;2471else {2472TreeNode<K,V> sl = xpl.left, sr = xpl.right;2473if ((sl == null || !sl.red) &&2474(sr == null || !sr.red)) {2475xpl.red = true;2476x = xp;2477}2478else {2479if (sl == null || !sl.red) {2480if (sr != null)2481sr.red = false;2482xpl.red = true;2483root = rotateLeft(root, xpl);2484xpl = (xp = x.parent) == null ?2485null : xp.left;2486}2487if (xpl != null) {2488xpl.red = (xp == null) ? 
false : xp.red;2489if ((sl = xpl.left) != null)2490sl.red = false;2491}2492if (xp != null) {2493xp.red = false;2494root = rotateRight(root, xp);2495}2496x = root;2497}2498}2499}2500}2501}25022503/**2504* Recursive invariant check2505*/2506static <K,V> boolean checkInvariants(TreeNode<K,V> t) {2507TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right,2508tb = t.prev, tn = (TreeNode<K,V>)t.next;2509if (tb != null && tb.next != t)2510return false;2511if (tn != null && tn.prev != t)2512return false;2513if (tp != null && t != tp.left && t != tp.right)2514return false;2515if (tl != null && (tl.parent != t || tl.hash > t.hash))2516return false;2517if (tr != null && (tr.parent != t || tr.hash < t.hash))2518return false;2519if (t.red && tl != null && tl.red && tr != null && tr.red)2520return false;2521if (tl != null && !checkInvariants(tl))2522return false;2523if (tr != null && !checkInvariants(tr))2524return false;2525return true;2526}2527}25282529}253025312532