Aleksandr Isakov / wine-fonts / Commits

Commit 4c130eb8
authored Jun 10, 2018 by Zebediah Figura
committed Jul 30, 2022 by Vitaly Lipatov

esync: Update README.

parent 71daaea6
Showing 1 changed file with 17 additions and 31 deletions: README.esync (+17, -31)
README.esync @ 4c130eb8
...
@@ -88,32 +88,23 @@ also kind of ugly basically because we have to wait on both X11's fd and the
messages. The upshot about the whole thing was that races are basically
impossible, since a thread can only wait on its own queue.
-I had kind of figured that APCs just wouldn't work, but then poll() spat EINTR
-at me and I realized that this wasn't necessarily true. It seems that the
-server will suspend a thread when trying to deliver a system APC to a thread
-that's not waiting, and since the server has no idea that we're waiting it
-just suspends us. This of course interrupts poll(), which complains at us, and
-it turns out that just returning STATUS_USER_APC in that case is enough to
-make rpcrt4 happy.
+System APCs already work, since the server will forcibly suspend a thread if
+it's not already waiting, and so we just need to check for EINTR from
+poll(). User APCs and alertable waits are implemented in a similar style to
+message queues (well, sort of): whenever someone executes an alertable wait,
+we add an additional eventfd to the list, which the server signals when an APC
+arrives. If that eventfd gets signaled, we hand it off to the server to take
+care of, and return STATUS_USER_APC.
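
As a rough sketch of the scheme the added paragraph describes: poll() on one eventfd per object plus one extra eventfd that the server signals for APC delivery, treating EINTR as a forced suspension for a system APC. The function, its signature, and the error handling below are assumptions for illustration, not the patchset's actual code.

    #include <errno.h>
    #include <poll.h>
    #include <stdint.h>
    #include <unistd.h>

    #define STATUS_SUCCESS  0x00000000
    #define STATUS_USER_APC 0x000000C0

    /* Wait on one eventfd per object, plus an extra eventfd that the server
     * signals when a user APC arrives. Returns the index of the signaled
     * object, or -1 with *status set to STATUS_USER_APC. */
    int wait_objects(const int *obj_fds, int count, int apc_fd,
                     unsigned int *status)
    {
        struct pollfd fds[count + 1];
        int i, ret;

        for (i = 0; i < count; i++)
        {
            fds[i].fd = obj_fds[i];
            fds[i].events = POLLIN;
        }
        fds[count].fd = apc_fd;
        fds[count].events = POLLIN;

        for (;;)
        {
            ret = poll(fds, count + 1, -1);
            if (ret < 0)
            {
                if (errno == EINTR)
                {
                    /* The server forcibly suspended us to deliver a system
                     * APC; poll() reports this as EINTR. */
                    *status = STATUS_USER_APC;
                    return -1;
                }
                continue;
            }
            if (fds[count].revents & POLLIN)
            {
                uint64_t value;
                /* The APC eventfd fired: consume it and hand the actual APC
                 * delivery back to the server. */
                if (read(apc_fd, &value, sizeof(value)) == sizeof(value))
                {
                    *status = STATUS_USER_APC;
                    return -1;
                }
            }
            for (i = 0; i < count; i++)
            {
                if (fds[i].revents & POLLIN)
                {
                    *status = STATUS_SUCCESS;
                    return i;
                }
            }
        }
    }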
+Originally I kept the volatile state of semaphores and mutexes inside a
+variable local to the handle, with the knowledge that this would break if
+someone tried to open the handle elsewhere or duplicate it. It did, and so now
+this state is stored inside shared memory. This is of the POSIX variety, is
+allocated by the server (but never mapped there) and lives under the path
+"/wine-esync".
There are a couple things that this infrastructure can't handle, although
surprisingly there aren't that many. In particular:
-* We can't return the previous count on a semaphore, since we have no way to
-  query the count on a semaphore through eventfd. Currently the code lies and
-  returns 1 every time. We can make this work (in a single process, or [if
-  necessary] in multiple processes through shared memory) by keeping a count
-  locally. We can't guarantee that it's the exact count at the moment the
-  semaphore was released, but I guess any program that runs into that race
-  shouldn't be depending on that fact anyway.
-* Similarly, we can't enforce the maximum count on a semaphore, since we have
-  no way to get the current count and subsequently compare it with the
-  maximum.
-* We can't use NtQueryMutant to get the mutant's owner or count if it lives in
-  a different process. If necessary we can use shared memory to make this
-  work, I guess, but see below.
-* User APCs don't work. However, it's not impossible to make them work; in
-  particular I think this could be relatively easily implemented by waiting on
-  another internal file descriptor when we execute an alertable wait.
* Implementing wait-all, i.e. WaitForMultipleObjects(..., TRUE, ...), is not
  exactly possible the way we'd like it to be possible. In theory that
  function should wait until it knows all objects are available, then grab
...
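
The first deleted bullet above proposes keeping a semaphore's count locally, since eventfd itself can't be queried. A sketch of that idea, with hypothetical names and the same best-effort semantics the deleted bullets describe:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    /* Pair the eventfd with an atomic shadow count. The shadow can lag the
     * kernel-side value; that is exactly the race the bullet tolerates. */
    struct esync_semaphore
    {
        int fd;           /* eventfd created with EFD_SEMAPHORE */
        atomic_int count; /* locally tracked current count */
        int max;          /* maximum count, enforced best-effort only */
    };

    int release_semaphore(struct esync_semaphore *sem, int release_count,
                          int *prev_count)
    {
        int prev = atomic_fetch_add(&sem->count, release_count);

        /* Best-effort maximum check: not atomic with the write below. */
        if (prev + release_count > sem->max)
        {
            atomic_fetch_sub(&sem->count, release_count);
            return -1;
        }
        if (prev_count) *prev_count = prev;

        uint64_t value = release_count;
        return write(sem->fd, &value, sizeof(value)) == sizeof(value) ? 0 : -1;
    }

    int wait_semaphore(struct esync_semaphore *sem)
    {
        uint64_t value;

        /* With EFD_SEMAPHORE, each successful read() decrements by one. */
        if (read(sem->fd, &value, sizeof(value)) != sizeof(value)) return -1;
        atomic_fetch_sub(&sem->count, 1);
        return 0;
    }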
@@ -144,18 +135,13 @@ surprisingly there aren't that many. In particular:
There are some things that are perfectly implementable but that I just haven't
done yet:
* NtOpen* (aka Open*). This is just a matter of adding another open_esync
  request analogous to those for other server primitives.
-* NtQuery*. This can be done to some degree (the difficulties are outlined
-  above). That said, these APIs aren't exposed through kernel32 in any way, so
+* NtQuery*. That said, these APIs aren't exposed through kernel32 in any way, so
  I doubt anyone is going to be using them.
-* SignalObjectAndWait(). The server combines this into a single operation, but
-  according to MSDN it doesn't need to be atomic, so we can just signal the
-  appropriate object and wait, and woe betide anyone who gets in the way of
-  those two operations.
* Other synchronizable server primitives. It's unlikely we'll need any of
  these, except perhaps named pipes (which would honestly be rather difficult)
  and (maybe) timers.
* Access masks. We'd need to store these inside ntdll, and validate them when
  someone tries to execute esync operations.
This patchset was inspired by Daniel Santos' "hybrid synchronization"
patchset. My idea was to create a framework whereby even contended waits could
...
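
The deleted SignalObjectAndWait() bullet above describes signaling one object and then waiting on another as two separate, non-atomic steps. A minimal sketch of that approach over eventfds, with all names hypothetical:

    #include <poll.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Signal one object's eventfd, then wait on another's, as two separate
     * steps; MSDN does not require the pair to be atomic. Returns 1 when the
     * wait object was signaled, 0 on timeout, -1 on error. */
    int signal_object_and_wait(int signal_fd, int wait_fd, int timeout_ms)
    {
        uint64_t value = 1;
        struct pollfd pfd;
        int ret;

        if (write(signal_fd, &value, sizeof(value)) != sizeof(value))
            return -1;

        pfd.fd = wait_fd;
        pfd.events = POLLIN;
        ret = poll(&pfd, 1, timeout_ms);
        if (ret <= 0) return ret;

        return read(wait_fd, &value, sizeof(value)) == sizeof(value) ? 1 : -1;
    }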