Wrong Week to Give the Beta Another Try

I got the monthly email and it sounded like the beta might be ready to try again. Holy god, I don’t know what you guys did to it, but literally nothing works now.

Using the legacy app, I had finally worked my queue down to zero from 6500+ a few weeks ago, but as soon as I upgraded, the new one told me I had 660 words in the queue. I loaded it up and it gave me 4 terms, churned for about 20 seconds each time I got through them, and then repeated the same 4 terms. I closed the app and restarted; it said 567 items in the queue, gave me the same 4 terms, and churned for 20 seconds again. Later I reopened the app, it said 500-something items, got stuck fetching next/updating cache, and that’s where I left it.

I logged into the legacy website to see if my queue was still at zero, but no, it’s now up to 400+ items. One of my lists says it has updated terms, and the progress bar seems to show about 5% more terms that I haven’t covered, even though the details box shows the last updates were at least 2 years ago. The 2.0 website loads, but the Android app is still stuck fetching.

I mean, good lord. I expected a couple of snags, but this is a total disaster. Are you not getting similar feedback from elsewhere? Guess I’m going back to the old app again.

There must be something acting up with your account; it sounds like you’ve been running into nothing but problems. The feedback has been mixed, but mostly positive. That’s obviously not a good experience, and you’ve hit more than just a few snags. I’m pinging the developers to see if they can spot any clues as to why you’re seeing so many issues.

It started working later, but with ups and downs. I was able to get the queue cleared again, but there’s notably more repetition than with the old client, so even with the ability to skip repeated terms it still takes me a lot longer to get through the same number of items in the due list.

The website seems to present terms in a different order from the app: I logged in briefly there to run a few terms to see if that would clear out the cache (or whatever), and for 10-20 terms there was nothing but tone reviews. The app doesn’t do that, i.e., it switches back and forth pretty consistently, but I’ve noticed in the past that the website (I’m talking 2.0 for everything here) tends to be 100% tones or runes for an extended stretch before switching (I only do tones and runes, not definitions or reading).

Later, I had one instance where the app did its churning thing, then presented a single term (can’t remember which one). I completed it successfully, and the app started churning again, after which it repeated the same item. This happens a lot, but usually there’s a sequence of about 3-4 items that get presented a second time after the app does its check-in with the server. This time, it came back yet again with the same single item, and then had to churn once more. First time I’ve seen that happen, 3 times with a single item.

Also, I added some new lists for the first time since switching to the beta app last year, and I spotted a clear example of how the SRS is definitely messed up: I got a new term (and I know it was new because the app produces a lot of fanfare now when a new term is introduced, a point I’ll address in a minute), 地球村, which I entered correctly and marked as easy. I didn’t see it the rest of yesterday’s study session, but it was right back at the top of the line today when I started it up again. That cannot possibly be what “easy” means. I shouldn’t have seen that term again for several weeks at least, if not months.
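
For what it’s worth, here’s the mental model I have of how SRS intervals are supposed to grow (just my own rough sketch in Python, with made-up multipliers; I obviously don’t know Skritter’s actual scheduler):

```python
# Rough sketch of typical SRS interval growth (made-up multipliers,
# not Skritter's real scheduler).
def next_interval(interval_days: float, grade: str) -> float:
    """Return the new review interval in days after grading an answer."""
    if grade == "forgot":
        return 1.0                              # missed it: start over at ~1 day
    factors = {"hard": 1.3, "good": 2.2, "easy": 3.5}
    return max(1.0, interval_days) * factors[grade]

# A brand-new term answered correctly and marked "easy" should get pushed
# out several days at minimum, not come back the very next morning.
interval = 1.0
for _ in range(3):
    interval = next_interval(interval, "easy")
print(f"after three 'easy' reviews: ~{interval:.0f} days")   # ~43 days
```

Whatever the real numbers are, the point is that an “easy” grade should move the next review further out, and the beta doesn’t seem to be doing that.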

As far as I can tell, these issues don’t come up with the legacy app. As I mentioned above, the legacy app takes a while to get through the daily due list because you can’t skip repeated terms, but even so, the beta adds so much extra BS that it ends up taking even longer. I don’t know why or how, but the two seem to operate on entirely different algorithms for calculating and presenting due items.

One more thing: is it really necessary to have a banner drop down every time a new item is added to the list? You have to actually click it away every time it happens. I have a really hard time understanding how anyone would find such a feature desirable, but for those of us who don’t, there should be an easy and intuitive way to prevent this from happening. I haven’t yet found a control for it in the app settings, but I assume it’s in there somewhere.

Thanks for the feedback; it does help us look at things with a critical eye and check edge cases that might not get covered in the normal test suites. The current production mobile apps work with a lot more data locally on the device (hence the horrendous sync the first time you start up), but since offline mode is still a work in progress on the Android/iOS betas, they’re constantly fetching data from the servers. In that sense, they function more like the legacy Skritter web app does, which is also why the review order is different. The newer APIs the beta client uses also fix a couple of time-offset bugs in the old ones, which is why the due count seems off by a few hours when you switch back and forth.
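
To make the time-offset point a bit more concrete, here’s an illustrative sketch (not our actual API code; the timestamps and offset are invented) of how comparing due times against a naive local clock instead of proper UTC can shift the count by a few hours:

```python
# Illustrative only: shows how a UTC-vs-local cutoff bug can make the
# same queue report different due counts. Timestamps and offset are made up.
from datetime import datetime, timedelta, timezone

items_due_at = [  # hypothetical review times, stored in UTC
    datetime(2023, 5, 1, 2, 0, tzinfo=timezone.utc),
    datetime(2023, 5, 1, 6, 0, tzinfo=timezone.utc),
    datetime(2023, 5, 1, 9, 0, tzinfo=timezone.utc),
]

def due_count(now: datetime) -> int:
    return sum(1 for t in items_due_at if t <= now)

local_now = datetime(2023, 5, 1, 1, 0)  # 01:00 on the device, in a UTC-7 timezone

# Buggy comparison: local wall-clock time mislabeled as UTC.
buggy_now = local_now.replace(tzinfo=timezone.utc)
# Correct comparison: convert the local time to real UTC first.
correct_now = local_now.replace(tzinfo=timezone(timedelta(hours=-7))).astimezone(timezone.utc)

print(due_count(buggy_now))    # 0 items "due" (cutoff treated as 01:00 UTC)
print(due_count(correct_now))  # 2 items due  (01:00 local is 08:00 UTC)
```

Same data, same moment, but a several-hour difference in what counts as “due”, which is the kind of drift you can see when switching between clients that use the old and new APIs.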

We’re always looking at ways to improve the “review loops”. It’s hard, since seeing an item multiple times is part of the learning and review process and every account is different, so it ends up being a loosely defined problem without one clear fix. That said, there are a few solid numeric indicators we watch to improve the situation, and it’s a work in progress.

A lot of the data loading comes down to the fact that you have a lot of data (and therefore a lot to load and sort through), and the underlying legacy Skritter technology just wasn’t built to handle what we do efficiently. We’ve been quiet about it recently, but we’ve been consistently working on a new API and data layer that, once it rolls out more completely, should give users speed increases of a few hundred percent in just about every site operation. It’s still a work in progress, though.

We’re going to improve the item-added indicator and general app notifications on mobile.

Since you do consistently seem to have a lot of problems, do you mind if we clone your study data to make a test account? You won’t see any change or notice anything on your end; we’d just be copying what you’ve studied.

Hey Michael, feel free to do anything you need with my study data that might be helpful. I appreciate the help.
