int bch2_check_topology(struct bch_fs *);
int bch2_check_allocations(struct bch_fs *);
/*
 * For concurrent mark and sweep (with other index updates), we define a total
 * ordering of _all_ references GC walks:
 *
 * Note that some references will have the same GC position as others - e.g.
 * everything within the same btree node; in those cases we're relying on
 * whatever locking exists for where those references live, i.e. the write lock
 * on a btree node.
 *
 * That locking is also required to ensure GC doesn't pass the updater in
 * between the updater adding/removing the reference and updating the GC marks;
 * without that, we would at best double count sometimes.
 *
 * That part is important - whenever calling bch2_mark_pointers(), a lock _must_
 * be held that prevents GC from passing the position the updater is at.
 *
 * (What about the start of gc, when we're clearing all the marks? GC clears the
 * mark with the gc pos seqlock held, and bch_mark_bucket checks against the gc
 * position inside its cmpxchg loop, so crap magically works).
 */
/* Position of (the start of) a gc phase: */
static inline struct gc_pos gc_phase(enum gc_phase phase)
{
	return (struct gc_pos) { .phase = phase, };
}
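The total ordering described above can be sketched in a simplified, self-contained form. This is an illustration only, not the real bcachefs definitions: the hypothetical `gc_pos` here carries just a phase and a single offset (the real one also encodes btree id, key position, and level), and `gc_pos_cmp()`/`gc_will_visit()` model the comparison an updater would make against GC's current position.

```c
#include <stdint.h>

/* Illustrative phases only; the real enum has more entries. */
enum gc_phase {
	GC_PHASE_START,
	GC_PHASE_BTREE,
	GC_PHASE_DONE,
};

/* Simplified stand-in for struct gc_pos: phase first, then a position
 * within the phase. Together these give a total order over every
 * reference GC walks. */
struct gc_pos {
	enum gc_phase	phase;
	uint64_t	offset;
};

/* Compare two GC positions, strcmp-style: <0, 0, or >0. Phase is the
 * most significant component; offset breaks ties within a phase. */
static inline int gc_pos_cmp(struct gc_pos l, struct gc_pos r)
{
	if (l.phase != r.phase)
		return l.phase < r.phase ? -1 : 1;
	if (l.offset != r.offset)
		return l.offset < r.offset ? -1 : 1;
	return 0;
}

/* Hypothetical helper: if GC's current position is still before the
 * reference being modified, GC will visit it later and the updater need
 * not update the GC marks itself; otherwise it must. */
static inline int gc_will_visit(struct gc_pos gc_cur, struct gc_pos ref)
{
	return gc_pos_cmp(gc_cur, ref) < 0;
}
```

This is why the write lock on a btree node suffices for references sharing a GC position: the comparison cannot distinguish them, so GC and the updater serialize on that lock instead.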