Conversation
libkernel/src/sync/rwlock.rs
Outdated
struct RwlockState {
    is_locked: Option<RwlockLockStateInner>,
    read_waiters: VecDeque<Waker>,
    write_waiters: VecDeque<Waker>,
Any reason a WakerSet wasn't used here?
Also, do we need to distinguish between readers and writers?
I copied it over from the mutex code, perhaps that has to be changed as well.
Ideally when choosing what to wake, we would wake up all the readers or a single writer.
Hm, interesting. I'll take a look at the mutex code and modify that I think. I suspect that if a lock is heavily contended, doing this check every time is worse than a spurious wake up.
Ah ok, I see why a VecDeque is most efficient. I think for fairness' sake we do FIFO, but WakerSet uses a BTree internally.
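For reference, a minimal sketch of the wake policy described above (wake a single queued writer if one exists, otherwise wake every queued reader). Only the three struct fields come from the diff; the RwlockLockStateInner variants and the wake_next helper are assumptions for illustration, not the kernel's actual code.

// Hedged sketch only: field names mirror the diff above, but the enum
// variants and the wake_next helper are assumptions.
use std::collections::VecDeque;
use std::task::Waker;

enum RwlockLockStateInner {
    Read(usize), // assumed: count of active readers
    Write,
}

struct RwlockState {
    is_locked: Option<RwlockLockStateInner>,
    read_waiters: VecDeque<Waker>,
    write_waiters: VecDeque<Waker>,
}

impl RwlockState {
    // Called once the lock has been released: prefer a single writer,
    // otherwise let every waiting reader through at once.
    fn wake_next(&mut self) {
        if let Some(writer) = self.write_waiters.pop_front() {
            // Writers are woken one at a time, in FIFO order, so a stream
            // of readers cannot starve them.
            writer.wake();
        } else {
            // No writer is waiting: readers can all proceed concurrently,
            // so drain and wake the whole queue.
            for reader in self.read_waiters.drain(..) {
                reader.wake();
            }
        }
    }
}

Popping a single writer from the front keeps FIFO ordering for writers, while draining read_waiters lets all readers proceed concurrently, which matches the "all the readers or a single writer" intent above.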
libkernel/src/sync/rwlock.rs
Outdated
        })
    }
    Some(RwlockLockStateInner::Write) => {
        if state.read_waiters.iter().all(|w| !w.will_wake(cx.waker())) {
Any reason we do this check? I only ask as I don't do this in any other sync primitives, Mutex being the main example since it's async. I suppose it's a matter of weighing up the small likelihood of a spurious wakeup vs iterating through this list every time the lock is contended?
I got this from the mutex code: https://github.com/hexagonal-sun/moss-kernel/blame/96fe0378b7c183aebb4ba27743ba2e9843fcdd8a/libkernel/src/sync/mutex.rs#L96C30-L96C70. I actually have no idea whether we should do this or not, I suppose it is a big performance hit though.
Interesting. Maybe I need to go back and revisit the Mutex code. That seems like it would be a lot of wasted cycles for a contended lock.
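To make the trade-off concrete, here is a hedged sketch of the two registration strategies being weighed above. The WaitQueue type and method names are invented for illustration; only the will_wake scan mirrors the linked mutex code.

// Hypothetical sketch of the two waker-registration strategies; not the
// actual mutex/rwlock implementation.
use std::collections::VecDeque;
use std::task::{Context, Waker};

struct WaitQueue {
    waiters: VecDeque<Waker>,
}

impl WaitQueue {
    // Strategy 1: scan the queue so the same task is never registered twice.
    // O(n) on every contended poll, but no duplicate entries and therefore
    // no extra wake-ups caused by stale clones of the same waker.
    fn register_dedup(&mut self, cx: &Context<'_>) {
        if self.waiters.iter().all(|w| !w.will_wake(cx.waker())) {
            self.waiters.push_back(cx.waker().clone());
        }
    }

    // Strategy 2: always push. O(1) per poll; a task that is polled more
    // than once while waiting may end up in the queue twice and receive one
    // spurious (harmless, but wasted) wake-up.
    fn register_always(&mut self, cx: &Context<'_>) {
        self.waiters.push_back(cx.waker().clone());
    }
}

register_dedup pays an O(n) scan on every contended poll to avoid a duplicate entry, whereas register_always is O(1) but can leave the same task queued twice, costing at most one spurious wake-up per extra poll.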
4978fca to e3c8f6e
No description provided.