Did a bunch of stuff to inherit from JobIntentService and use enqueue(),
but doesn't work yet. OS is unable to bind, with this error:
09-21 17:20:51.678 3050 3050 W JobServiceContext: Time-out while trying to bind 2edee28 #u0a277/1111 org.eehouse.android.xw4dbg/org.eehouse.android.xw4.BTService, dropping.
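For the record, the shape being attempted is roughly the sketch below; the class name and job id are illustrative, not the real BTService code.

    import android.content.Context;
    import android.content.Intent;
    import android.support.v4.app.JobIntentService;

    // Class name and job id here are illustrative, not the real BTService code.
    public class ExampleJobService extends JobIntentService {
        private static final int JOB_ID = 1000;   // must be unique per service class

        public static void enqueue( Context context, Intent work ) {
            // enqueueWork() replaces startService(); on O+ the work runs as a job
            enqueueWork( context, ExampleJobService.class, JOB_ID, work );
        }

        @Override
        protected void onHandleWork( Intent intent ) {
            // work formerly done in onHandleIntent() goes here
        }
    }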
Notifications don't work on Oreo without this change, which pulls in a newer
Support Library in order to get NotificationChannel, and creates and uses a
channel as the docs describe. Requires that MinSDK be raised from 8 to 14,
which may lock some users out. It *should* be possible not to do this in
the fdroid variant since their app store doesn't require SDK 26, but
I'll look at that later.
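The channel setup itself is small; something along these lines (the channel id and name are made up here, and the channel-aware Builder constructor is presumably why the newer Support Library is needed):

    import android.app.NotificationChannel;
    import android.app.NotificationManager;
    import android.content.Context;
    import android.os.Build;
    import android.support.v4.app.NotificationCompat;

    // Channel id/name here are made up; real code would use a string resource.
    class ChannelSketch {
        static NotificationCompat.Builder makeBuilder( Context context ) {
            String channelID = "default";
            if ( Build.VERSION.SDK_INT >= Build.VERSION_CODES.O ) {
                NotificationManager mgr = (NotificationManager)
                    context.getSystemService( Context.NOTIFICATION_SERVICE );
                NotificationChannel channel =
                    new NotificationChannel( channelID, "Default",
                                             NotificationManager.IMPORTANCE_DEFAULT );
                mgr.createNotificationChannel( channel );   // no-op if already created
            }
            // the channel-aware Builder ctor comes from the newer Support Library
            return new NotificationCompat.Builder( context, channelID );
        }
    }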
For some reason my laptop wouldn't build without this change. No idea
what happened to the newer version I was using or if the change
works (beyond compiling). Should be easy to find the change later if
it's a problem.
Some devices, including my Moto, are apparently calling server_do() more
often than most. When the game was supposed to be asking the user to pick
tiles, that resulted in stacked TilePickAlerts, and the stack of them
(taken together) sent too many picked tiles to the game and made it
crash. So modify the server to have only one pending tile-pick request going
at a time. Because the server can't know when the user dismisses the
alert on Android, and so won't post again, respond to the dismissal by
closing the game. Reopening will put it in a state where the tile picker
can get called again.
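A minimal sketch of the Android-side half of that, assuming the dismissal handler lives in a DialogFragment (names below are placeholders, not the real TilePickAlert/BoardDelegate code):

    import android.content.DialogInterface;
    import android.support.v4.app.DialogFragment;

    // Names below are placeholders, not the real TilePickAlert/BoardDelegate code.
    public class TilePickSketch extends DialogFragment {
        private boolean m_tilesPicked = false;   // set when the user confirms a pick

        @Override
        public void onDismiss( DialogInterface dialog ) {
            super.onDismiss( dialog );
            if ( !m_tilesPicked ) {
                // the server won't re-post its single pending request for this open
                // game, so close it; reopening gets the picker posted again
                getActivity().finish();
            }
        }
    }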
Logic error meant it was never sent. Now always send on receipt of a
well-formed invitation, even if e.g. the recipient's missing a wordlist
and the game can't be started immediately.
Had inadventently changed how NetLaunchInfo was transmitted, and crashed
on receiving from older builds. Fix to not crash, and then fix to send
and recieve in the old format.
So now all jni code uses a single dutil context, but also a single
mempool and jniutil instance instead of new instances of the latter two
per game and dict-iteration.
A number of jni calls were "stateless", which meant they allocated their
own vtmgr and mpool instances each time they were invoked. Instead, invoke
them with the global jni closure, add a vtmgr to it (it already has an
mpool), and use those instead of allocating/freeing each time. To make sure
no race conditions are introduced (mpool, though debug-only, is probably not
thread-safe), guard these new uses with an in-use flag. If that fires
I'll need a mutex or something.
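The real flag lives in the jni/C layer; a Java illustration of the in-use-flag idea (detect concurrent use loudly instead of taking a lock up front) would look something like this:

    import java.util.concurrent.atomic.AtomicBoolean;

    // Java illustration only; the actual flag lives in the jni/C code.
    class InUseGuard {
        private final AtomicBoolean m_inUse = new AtomicBoolean( false );

        void claim() {
            // if two threads ever get here at once, fail fast: time for a real mutex
            if ( !m_inUse.compareAndSet( false, true ) ) {
                throw new IllegalStateException( "global closure already in use" );
            }
        }

        void release() {
            m_inUse.set( false );
        }
    }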
Somehow I got a wordlist into a location different from what was
recorded in the DB table, and since the delete command matched on
location as well as name, it was never deleted (which meant the checksum
was never updated, and so upgrading never seemed to succeed). Removing
the match on location fixed that problem, and since I don't see any harm
in caching only one version of a wordlist I'll simply leave it that
way.
Did a bunch of cleanup as well.
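The delete itself is a one-liner; a sketch of the change, with the table and column names being guesses rather than the real schema:

    import android.database.sqlite.SQLiteDatabase;

    // Table and column names are guesses, not the real schema.
    class DictRows {
        static void remove( SQLiteDatabase db, String dictName ) {
            // was roughly: db.delete( "dicts", "name = ? AND loc = ?", ... );
            // matching on name alone means a copy recorded under a stale location
            // still gets deleted, so the checksum gets refreshed on the next add
            db.delete( "dicts", "name = ?", new String[] { dictName } );
        }
    }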
I'm seeing several IllegalStateException crashes due to e.g. having an
alert posted when app's in the background. Need to fix them, but the
debug build crashing isn't helpful. Log.e() instead.
The pesky thing is back. When the app's in the background with an
unconnected game open displaying the "resend/wait" alert and the game
connects, I get an IllegalStateException, either because the fragment
stack's being modified after onSaveInstanceState() or because the dialog
fragment, saved as an instance variable in BoardDelegate, dates from an
earlier state. Anyway, catching and dropping the exception, and elsewhere
failing silently to rebuild the alert, seems to fix the problem, but the
right fix is likely different. I suspect hanging onto that iVar is wrong,
and that the dialog should go away when onStop() is called and then be
rebuilt later from saved state. But for now, not crashing is good.
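The catch-and-drop, roughly (the helper name and tag handling are placeholders):

    import android.support.v4.app.DialogFragment;
    import android.support.v4.app.FragmentActivity;
    import android.util.Log;

    // Helper name and tag handling are placeholders.
    class SafeShow {
        static void show( FragmentActivity activity, DialogFragment frag, String tag ) {
            try {
                frag.show( activity.getSupportFragmentManager(), tag );
            } catch ( IllegalStateException ise ) {
                // state's already been saved (app backgrounded): drop it rather than
                // crash; the alert should get rebuilt from saved state on resume
                Log.e( tag, "dropping show(): " + ise );
            }
        }
    }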
Remove a bunch of duplicated code, replacing with implementation in
XWService. Fixes duplicate invitation and game opening policies being
slightly different.
Try some funky layout shite to get, within a horizontal linear layout,
the first text field truncated if necessary so that the second (holding
the score) can be fully displayed. Tested on exactly one emulator so
far.
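The real change is in the layout XML; the same idea expressed programmatically, for the record (all names below are illustrative):

    import android.content.Context;
    import android.text.TextUtils;
    import android.widget.LinearLayout;
    import android.widget.TextView;

    // All names below are illustrative.
    class ScoreRowSketch {
        static LinearLayout build( Context context ) {
            LinearLayout row = new LinearLayout( context );
            row.setOrientation( LinearLayout.HORIZONTAL );

            // name: width 0 + weight 1, so it absorbs any truncation
            TextView name = new TextView( context );
            name.setSingleLine( true );
            name.setEllipsize( TextUtils.TruncateAt.END );
            row.addView( name, new LinearLayout.LayoutParams(
                             0, LinearLayout.LayoutParams.WRAP_CONTENT, 1f ) );

            // score: wrap_content + weight 0, so it's always measured at full width
            TextView score = new TextView( context );
            row.addView( score, new LinearLayout.LayoutParams(
                             LinearLayout.LayoutParams.WRAP_CONTENT,
                             LinearLayout.LayoutParams.WRAP_CONTENT, 0f ) );
            return row;
        }
    }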
I think there's a bug in Weblate because I've seen this before: where
the English provides only an <other>, the translation comes back with
only a <one>. That's wrong. Try adding a <one> in the English case to
see if that makes a difference.
Otherwise it takes too long to scroll if you have hundreds of
games. To make this work I had to move the scroller to the left side of
the games-list display, as otherwise the scroller steals events from the
expander thingies.
Likely because of something in the jni world, an unset per-player dict name
is an empty string rather than null, so test for that too. Fixes the dicts
popup in a newly-created game having an empty first line " (in use)".
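A minimal sketch of the test; TextUtils.isEmpty() covers both cases:

    import android.text.TextUtils;

    class DictNameCheck {
        // isEmpty() is true for both null and "", which covers the jni case
        static boolean isSet( String dictName ) {
            return !TextUtils.isEmpty( dictName );
        }
    }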
I like this better than the previous fix: rather than share a
thread->env map with the game world allocate a new one for each
iterator. This could cause problems if the iterator is used on threads
that don't currently call map_thread(), or if there are callbacks that
need to look up the env that I'm not aware of. Needs testing...
Stumbled on a NPE opening up the wordlist browser configuring the first
game on a new install. So now test for null there and init early if
necessary. Seems to work, and won't do anything in places where it's not
needed.
Remake the min and max spinners every time either value changes so they
can't be used to set nonsensical values. (Which leads to immediate
crashes.) I'm sure this wasn't always a problem, but...
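A sketch of the remake-on-change idea (names and ranges below are made up): each spinner only ever offers values that keep min <= max, so nonsense can't be selected.

    import java.util.ArrayList;
    import android.content.Context;
    import android.widget.ArrayAdapter;
    import android.widget.Spinner;

    // Names and ranges below are made up.
    class RangeSpinners {
        static void remake( Context context, Spinner minSpin, Spinner maxSpin,
                            int curMin, int curMax, int absMin, int absMax ) {
            // min spinner offers absMin..curMax, max spinner curMin..absMax, so a
            // selection on either can never produce min > max
            minSpin.setAdapter( makeAdapter( context, absMin, curMax ) );
            minSpin.setSelection( curMin - absMin );

            maxSpin.setAdapter( makeAdapter( context, curMin, absMax ) );
            maxSpin.setSelection( curMax - curMin );
        }

        private static ArrayAdapter<Integer> makeAdapter( Context context,
                                                          int low, int high ) {
            ArrayList<Integer> values = new ArrayList<>();
            for ( int ii = low; ii <= high; ++ii ) {
                values.add( ii );
            }
            return new ArrayAdapter<>( context,
                                       android.R.layout.simple_spinner_item, values );
        }
    }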
As reported to google, dict iterator destruction was crashing due to a
race condition if it happened after a game using the same dict had been
closed since it needed a mapped env that the game closure would
remove. Fixed in two ways, one by adding the mapping prior to the code
that uses it (a common pattern: add happens many times, whenever it might
be needed, but remove only once), and second by passing env into the
code that was crashing.
The mapping stuff remains inherently racy and I'm not sure now how to
fix that. It depends on there being a place to unmap after which it's
guaranteed the mapped value won't be needed again. When two
objects (game and dict_iter in this case) map the same env/thread combo
there's a race.
Looks like the assertion was left in when adding support for dual-pane
mode, as all other onPosButton() implementations called super rather
than assert. Which this one does now too.
Getting ANRs because (I think) the main thread's waiting for the write
thread to die and now the write thread's doing a ton of work
sometimes. So move the threads into a standalone object that can be
allowed to die on its own time without anybody waiting.
I *think* the reason I'm occasionally seeing toasts about not finding a
move is that when the engine's interrupted by there being a UI event in
the queue, that error is posted. Instead, try posting only when nothing's
been found at the end of the search.
Having reconfigured to use a non-existent relay port as a test of falling
back to the web APIs, tweak stuff: send the packets that have been
accumulated when an EOQ is found (rather than dropping all of them
immediately) before exiting the write thread; and start the threads up
when posting a packet in case they aren't running (they may not be when
the post happens via a timer firing).
Seemed to be causing ANRs. Integrate instead into outgoing message queue
by using poll(timeout) then checking for unack'd packets every time
through the loop (but not more than once/3 seconds or so.)
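The loop shape, roughly (queue element type and the two helpers below are placeholders):

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    // Queue element type and the two helpers are placeholders.
    class WriteLoopSketch {
        private final LinkedBlockingQueue<byte[]> m_queue = new LinkedBlockingQueue<>();
        private long m_lastUnackCheck = 0;

        void loop() throws InterruptedException {
            for ( ; ; ) {
                // poll() instead of take(): wake up even when nothing's queued
                byte[] packet = m_queue.poll( 3, TimeUnit.SECONDS );

                long now = System.currentTimeMillis();
                if ( now - m_lastUnackCheck >= 3 * 1000 ) {   // at most ~once/3 seconds
                    m_lastUnackCheck = now;
                    resendUnackedPackets();
                }

                if ( null != packet ) {
                    send( packet );
                }
            }
        }

        private void resendUnackedPackets() { /* placeholder */ }
        private void send( byte[] packet ) { /* placeholder */ }
    }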
Presence of a timestamp instead of a boolean determines whether a packet
should next go via the Web. Timestamps might also make it possible to
process a larger number of unacked packets in a single timer fire....
Track ack'd and unack'd packets. When there are ten more of the latter than the former,
skip the UDP-send step. This is probably not the algorithm I'll settle
on (an explicit PING to the relay over UDP might be simpler), but it's
simple and easy.
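Roughly this, with placeholder counters and names:

    // Counters and names are placeholders.
    class UdpHealth {
        private int m_sentCount = 0;    // packets sent via UDP
        private int m_ackedCount = 0;   // packets the relay has ack'd

        synchronized void noteSent() { ++m_sentCount; }
        synchronized void noteAck()  { ++m_ackedCount; }

        // skip the UDP attempt once un-ack'd packets outnumber ack'd ones by ten
        synchronized boolean skipUDP() {
            int unacked = m_sentCount - m_ackedCount;
            return unacked >= m_ackedCount + 10;
        }
    }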
Send each packet via UDP if that's thought to be working (always is,
now) and start a 10-second timer. If it hasn't been ack'd by then,
resend via Web API. Tested by configuring to use a UDP socket that the
relay isn't listening on. Only problem is that the backoff timers are
broken: it never stops sending every few seconds.
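The send-then-fall-back shape, sketched with placeholder names (the real code has its own packet type and send paths):

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Packet ids and the two send helpers are placeholders.
    class FallbackSender {
        private final ScheduledExecutorService m_timer =
            Executors.newSingleThreadScheduledExecutor();
        private final Set<Integer> m_acked = ConcurrentHashMap.newKeySet();

        void send( final int packetID, final byte[] packet ) {
            sendViaUDP( packet );
            // if the relay hasn't ack'd in 10 seconds, fall back to the Web API
            m_timer.schedule( new Runnable() {
                    @Override
                    public void run() {
                        if ( !m_acked.contains( packetID ) ) {
                            sendViaWebAPI( packet );
                        }
                    }
                }, 10, TimeUnit.SECONDS );
        }

        void onAck( int packetID ) { m_acked.add( packetID ); }

        private void sendViaUDP( byte[] packet ) { /* placeholder */ }
        private void sendViaWebAPI( byte[] packet ) { /* placeholder */ }
    }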
The plan's to use the native relay protocol first, then to fall back to
the slower but more reliable (esp. on paranoid/block-everything wifi
networks) webAPI. This is the name change without behavior
change (except that the native kill() to report deleted games is gone.)
When grouping to allow multiple packets per outbound API call I forgot
that some are there to mark the end-of-queue: they can't be sent! Trying
caused an NPE. Now if any EOQ is found in a batch, that batch is dropped
and the thread's exited.
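The check is just a scan of the batch before sending; a sketch with placeholder names:

    import java.util.List;

    // The packet type and isEOQ() accessor are placeholders.
    class EOQCheck {
        interface Packet { boolean isEOQ(); }

        // if any end-of-queue marker made it into the batch, the caller drops the
        // whole batch and lets the write thread exit rather than trying to send it
        static boolean shouldDrop( List<Packet> batch ) {
            for ( Packet packet : batch ) {
                if ( packet.isEOQ() ) {
                    return true;
                }
            }
            return false;
        }
    }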
Making the right_side elem match its parent height prevents the
lower-right region of game list items from falling through and
triggering a toggle-selection event.