SRS has serious problems (srsly)

I don’t know how much of this has to do with Beta issues, but I upgraded the app again yesterday and am running into all kinds of trouble. I always have trouble with the adjustment when I try the upgrade, but this time I was determined to just get used to it, so I’ve tried to put up with a lot of weird stuff in the beginning.

For instance, I started off with 800+ words in my study queue. Fewer than 10 of them were anything other than tone reviews until I got the number down below 100. I figured there was some kind of adjustment taking place and that this was a one-off situation. So far, it seems to be: today my queue started at under 300, and the balance is much better.

The problem I’m having now is that “got it” and “too easy” are both functionally useless. There is nothing I can do short of banning a word or character that will prevent it from popping up again and again in my queue. Worse, there’s no diversity in my studies: because of the frequent repetition, I’m only getting exposure to about 10 terms. Even if I skip through (and this is another problem, but I’ll post it in the Beta subforum later), marking at either 3 or 4, it just comes back. I don’t know if it’s being subtracted from my study counter either, because it tends to get stuck, forcing me to go to the dashboard to get it to update.

I don’t even know if I’m going to be able to get through today’s study block and clear the queue down to zero before I give up and go back to the old app (yet again). I really would prefer not to, because the SRS/got it/easy functionality (or lack thereof) was driving me nuts there too, and I upgraded primarily so I could at least have the option to skip terms. At the same time, it seems that in upgrading the app, I’ve also entered a new situation where the SRS problems are magnified beyond belief.

To be clear, these aren’t all words that I’ve already gotten wrong today, or that I have a previous history of making mistakes with. A lot of these terms are new ones just being added to the queue. If I don’t mark them as “too easy” on their very first appearance, they’re on me like a social disease from that point after. And I’m noticing that a lot of ones that I DO mark as too easy from the get-go still show up again a few minutes later, especially after making a trip to the dashboard to get the counter to reset.

I don’t know how the SRS is supposed to work; I got the impression when I began working with Skritter a year or two ago that it was based on some iteration of Supermemo, like Anki. Here’s how I’d want it to work if everything were up to me: if a new term enters my queue and I get it right on the first try–got it, not too easy–then that’s it. I don’t see it again for the REST OF THE DAY. If I mark it as “too easy,” then it goes to the back of the line. All the way back. If I get a term wrong the first time, but get it right the next time (which should be at least 10-20 terms later, not almost immediately), then I should be done with it for the rest of the day, or maybe after another successful try. If it’s a repeat term that I’ve had trouble with in the past but I still get it right on the first try in a single day, DONE. The next day when it shows up can be determined by the larger history, i.e., if I’ve gotten it wrong 10x a day for the last month, then yeah, show it to me tomorrow to see if I still remember it, and then gradually lengthen the repetition space as long as there’s success.
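If it helps, here’s the wish-list above boiled down to pseudocode. This is purely my own sketch of the behavior I’d want, with made-up names, and not a claim about how Skritter actually works:

```python
# My own sketch of desired within-day scheduling (not Skritter's real logic).
# Grades follow the app's scale: 1-2 = wrong, 3 = "got it", 4 = "too easy".

def next_step(term, grade, first_try_today):
    """Decide what happens to a term after one review today."""
    if grade >= 4:                      # "too easy"
        return "back of the line"       # all the way back, then done for the day
    if grade == 3 and first_try_today:  # "got it" on the first try
        return "done for today"         # don't show it again until at least tomorrow
    if grade <= 2:                      # got it wrong
        return "retry after 10-20 other terms"
    # got it right on a retry: one success is enough
    return "done for today"
```

The point being: a single first-try success, at any stage, should end that term’s appearances for the day, and tomorrow’s scheduling can be decided from the longer history.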

I like the way Anki presents the grading options: 1) Again, 2) Hard, 3) Good, 4) Easy, with each option showing clearly how long a space will be inserted before the next repetition, with everything being a matter of days except “Again,” which is minutes. The trouble with Anki is that it always cycles cards in the same order, and I also think there’s some value in repeating terms a couple of extra times when they’ve been problematic in the recent past. Oh, and you can’t draw characters with it. That’s a big one.

If Skritter doesn’t/can’t/isn’t supposed to function along those lines, then can we at least have a button added that will banish a term for the rest of the day? Even skipping character drawing and marking them directly is burning up way too much of my study time, and severely setting back my larger overall goals and targets.

I hope this doesn’t come across as too demanding or fussy, or that I think it’s easy to solve all these problems and make everything perfect for everybody (and I especially hate making negative comparisons to other apps, so I wasn’t trying to do that with the Anki example, just trying to flesh out my idea of how SRS is supposed to work); it’s just that this is such a great educational client when things work right, and it gets tremendously frustrating how much these little issues can derail everything.

/rant. Hope there’s something of value to be gained from all that.

3 Likes

It does sound like something has gone wrong with the way the SRS is supposed to be working on your end. We’re going to discuss your feedback and I’ll post an update here!

1 Like

Meanwhile, my study list is currently stuck at 10 items. I’ve done probably 30+ items since hitting 10, but can’t get it to drop any lower. The study list is an ongoing issue like I described above, except now backing out to the dashboard no longer has any effect on resetting it. It drops sometimes, gets stuck sometimes or drops a backlog of items all at once; whatever it does, it never subtracts terms from the list in any fashion corresponding to when they’re drawn. I have never once seen the list drop by one right after I clear a single character; rather, it’s always dropping clusters of various size, or just not at all.

1 Like

One of the resolutions of the discussion is that the app will become less reliant on server calls and the due count and scheduling will become more reliable. Regarding your mention earlier of how the SRS is supposed to work, you can check out this page with some details: http://docs.skritter.com/article/51-spaced-repetition

Are you running into these same issues if you study on the beta web version (www.skritter.com) as opposed to the Android app?

1 Like

The SRS description in the link there sounds fine. The problem I’m running into is the spacing and repetition of terms as they appear in a single day while I’m trying to clear my items list and increase the rate of new terms added. Here’s an example of the discrepancy between the SRS description and how it actually works for me:

A grade of 3 (“got it”) gives an interval about 2.2 times as long as the one you just had, not the one you were scheduled for, if you studied it on time. If you remembered it while overdue, Skritter guesses that you knew it better than we’d thought, so the next interval is longer. For example, if an item was scheduled for one week, and you saw it after two weeks, you’ll get it again in about 3.4 weeks.

In reality, if I draw a character correctly on the first try and proceed with it marked as “got it,” I will see it again within 30 seconds to about 2 minutes, probably at least 4 more times. Marking it as “too easy” increases the interval somewhat, but at present I’m still guaranteed to see it later that same day, usually before 10 minutes are up–and that’s only if I mark it as “too easy” the first time it appears. Otherwise, it comes up repeatedly, with little interval in between.
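Just to make the gap concrete, here’s my reading of the documented arithmetic. The half-credit for overdue time is my own guess, added only to make the docs’ example numbers roughly work out; the page doesn’t spell out that part:

```python
def next_interval(scheduled_weeks, actual_weeks, factor=2.2):
    """Sketch of the documented grade-3 ("got it") rule.

    The half-weight credit for overdue time is my own assumption;
    the docs only give the example numbers, not the exact formula.
    """
    overdue = max(0.0, actual_weeks - scheduled_weeks)
    effective = scheduled_weeks + 0.5 * overdue
    return factor * effective

next_interval(1, 1)   # studied on time: 2.2 weeks
next_interval(1, 2)   # two weeks late: 3.3, close to the docs' "about 3.4"
```

By that math, a first-try “got it” should push a term out for weeks, not 30 seconds.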

I have noticed when pulling the info on a specific term that it often says it was last shown a certain number of days or weeks ago, but my impression–untested, since it would require more time and focus than I have–is that my current study queue is largely composed of terms I saw the previous day, with about 3-5 new terms added in.

Whatever the case, my progress over the last 2-3 months has largely come to a standstill. Looking over my stats on the legacy page, you can see the trend pretty clearly: in order to keep progress going in May, I had to spend twice as much time studying. Most of that was going around in circles, with the app making me draw and redraw the same terms again and again, no matter how I marked them by difficulty. Switching to the beta app has allowed me to at least skip all the extra drawing, but it’s introduced its own set of issues in terms of what I’m presented with for study. This month, I’m wiped out, and you can see that too. My study time is back down to levels that I’d maintained prior to May with consistent gains, but now my progress has flatlined.

This is getting to be a serious problem. I was supposed to be in review mode 2 months ago, and spending the bulk of my study time reading and using authentic media, but instead I’m still chasing my tail with 18% of my last study list hovering out of reach. I don’t know what happened, but something big occurred in about April/May that wrecked everything. Maybe that tracks with some identifiable internal change you made at the time. I hope so, because as things are, the app isn’t much use to me as a learning tool anymore: it’s draining time and mental energy from me, while showing little result for the effort.

As to the website, I recall that the last time I did a study session there, I was able to go 10-15 minutes without seeing a single repeat term. OTOH there are still major discrepancies with the study items list there–I just visited to check my stats before writing this reply, and even after the fixes following this week’s meltdown, the 2.0 website is showing 2013 items due while the beta app tells me it’s 87. Of course, 87 items on the app probably equates to about 2000 actual views, if it’s even possible to get the list cleared to 0, but on the front end at least, the reporting is inconsistent between UIs.

I’d spend more time studying at the website if it weren’t for hardware limitations. I can only squeeze out about 3 items per minute using a mouse, and my writing pad is really ineffective with short strokes. I don’t guess you guys have a recommendation for a good pad for character study that comes in under $100, do you? I was thinking about opening a thread about it…

Was this the 1.0 or 2.0 site? I noticed the screenshot is of 1.0, so I thought I should make sure which version!

These numbers should closely match. I’ll ask @Michael if he can run a script to make sure the values are matching up.

While that’s totally not a solution to the problem, which is making sure the mobile app behaves the same way as the web version, I do have a recommendation for a nice writing tablet under $100: https://www.amazon.com/Wacom-Bamboo-CTL471-Tablet-Black/dp/B00EVOXM3S/ref=sr_1_6?ie=UTF8&qid=1498512537&sr=8-6&keywords=Wacom+bamboo+pen

It sounds like what’s happening is the reviews aren’t properly saving which is why they keep cycling. The info showing that it was last reviewed days/weeks/months ago versus sometime that same day is the tip-off. Just to make absolutely sure, would you be able to confirm you’re running the latest build of the beta app?

I meant the 2.0 website. I took the screen grab from the old site because it was easier to find the stats I was looking for there.

I found out that the website was set to show definition and other reviews, whereas I only ever use Skritter for drawing and tones. I didn’t realize it was possible to have different settings across UIs. Anyway, now they match up.

I think I’ve found something out about this. I was using the toggle in the lower right-hand corner to mark familiar items as “got it” or “too easy” rather than tapping through (that is, tapping the middle right of the screen to advance to the next character) to avoid drawing them, because when you tap through you don’t have the option to mark items according to difficulty. And it appears that when you do that, the app doesn’t count it at all and simply re-presents items in an endless loop until you either deal with them in some other way, or just go insane. I went through about 6-7 cycles of (I think) the same 30 items presented in the same order over and over, and then just started tapping the right middle and advancing that way, and then finally the counter started dropping.

Also I suspect that even drawing characters and then marking them as “too easy” either fails to register at all, or has no stronger an effect than just tapping through. I can still have a brand-new term introduced into the list, draw it and mark it “too easy,” and see it come back up in (generally) less than 5 minutes. As near as I can tell, it only re-appears once, maybe twice, but marking “got it” has exactly the same effect. I have to do every character a minimum of twice, no matter how I mark it.

The counter is still messed up, too. I can get it down to about 10 or slightly under, and then it will barely budge. It takes dozens of correct responses to get the counter to drop only 1 or 2 items. I can’t properly express what a tremendous psychological demotivator it is to have that number just hovering up there without responding to your efforts.

I’ve chosen to ignore it. My approach at present is to tackle my morning queue and get it down to the point where the counter hangs, and then just tap through everything as fast as I can until it clears. It’s not ideal, but these are all terms I’ve seen before, and god knows they’re going to be right back in my queue tomorrow anyway. Then I manually add my 100 terms, get through them, and then check the counter once or twice throughout the day to keep it from getting out of control.

Like I said, not ideal, but I’m done wasting time with endless repetitions of review items I can’t clear from my queue. I think I’m easily saving 80% in wasted time this way. Eventually, these issues will be sorted out and the study list will function normally, but until then I’ve got a way to get through the next few weeks without going all the way nuts while getting back on track with my larger overall study objectives.

It took me a while to find it, but I’m using app version 2.3.0. I couldn’t find a way to look for updates.

I think I’ve figured out a major part of this problem. I do the bulk of my Skrittering once a day, but terms are periodically added in small (or lately, huge) chunks throughout the day, in some sort of “cohort” arrangement where items scheduled to appear on the same day show up together.

The thing is, if I have an item scheduled for review, it will appear in my list with the other terms from that day, but if I don’t clear it pretty soon, after a period of time that I haven’t determined yet, that same group of words will be re-added to the list, and in this way multiple ensuing “layers” of review terms will be tacked onto my list until 24 hours later, when I start my normal study time.

At this point, items added to the list are solid, meaning I have to clear them manually by either drawing them correctly or skipping through. If I get a term right on the first try, my expectation from SRS is that I don’t have to see that term again until the next spaced repetition interval (days or weeks hence), but because it’s been loaded multiple times during my inactive portion of the day, that means I may run across it in my queue 1-4 more times, no matter how I mark it. I don’t think I ever get “one-offs;” every item that comes through my queue shows up at least twice, which means my list is potentially inflated by at least 200%.
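To illustrate what I mean by layers (again, this is my own guess at the mechanism, with made-up names):

```python
# My guess at the "layering" behavior: if a day's cohort isn't cleared
# promptly, the whole cohort gets re-added to the list, so each reload
# stacks another full copy on top of the queue.

def load_layers(cohort, reload_count):
    """Build the queue after the initial load plus each uncleared reload."""
    queue = []
    for _ in range(reload_count + 1):   # initial load + each reload
        queue.extend(cohort)
    return queue

queue = load_layers(["你", "好", "嗎"], reload_count=2)
len(queue)   # a 3-term cohort inflated to 9 entries
```

Two uncleared reloads and the list is already triple its real size, which would explain the at-least-200% inflation I’m seeing.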

Again, not a terribly big deal unless you study large lists like I do. I’ve got about 14,000 terms in my review list, and I need all of them there. But I also need some efficiency in my study. Lately my list is swelling by about 350 terms per day, which is more than I can keep up with. And the worst part of it is that easily half of it is bloat.

Now, I’ve noticed that if I get a term wrong, then it will be inserted into my queue dynamically while I’m clearing the rest of the items from the day before.

Why can’t we have a similar function where redundant entries are removed automatically from the list if they’re answered correctly on the first try? As frequently as the 2.0 app checks the database to update study items, it seems like this should be possible.

For that matter, given that Skritter takes such frequent pauses to report/update (I assume that’s what it’s doing), what’s the necessity of having it silently add items to the study list when it’s not being used? Why put items in there more than once in between sessions when they could simply be added on the fly during an active session? I understand the value of having a clear number of items for review when beginning a study session, but when half of that number is predicated on the expectation that you’re going to get them wrong and will need repeated practice, that’s not a very good solution either.

Anyway, that’s a big part of the problem. On the other side of it, I’m still seeing items I’ve marked as 3 or 4 turning up the next day, and that shouldn’t be happening at all. I don’t know how widespread this is; I doubt it’s every term, because if that were the case, my list would be continually expanding (although lately, it sort of seems to be…but not that bad yet). But it’s real.

I really wish something could be done to address this, especially on days like today when I’m unable to use the app due to some database error or other, and seeing my items due expand by 200-800 items per day. I’m now sitting at over 1000 items due, and I know that when I’m eventually able to use the app again, I’m going to be seeing some of these items 4 times or more each, and it’ll take me 4-6 hours to clear my queue back down to zero, even if I answer them correctly the first time or mark them as “too easy.”

The whole point of SRS is to make study more efficient, and this is the exact opposite of that. There is no reason on earth to require multiple reviews of an item that can be correctly answered on the first attempt.

I can understand the need to compromise between having an accurate counter for items due and allowing for the possibility that some items will need repeated reviews in order to maximize retention.

I think I have a solution for this: allow only one instance of each vocabulary term to be added to the items due list, or at the very most, one per day. It would be ideal to have only one instance of an item in the list, period, in consideration of circumstances that occasionally require a skipped or incomplete day of reviews, but maybe that would prove too difficult to encode consistently; not my field, obviously. If an item is answered correctly on the first attempt, the due list drops by one. If not, the count stays static and, depending on SRS calculations, is not reduced until that character’s repetitions are finished for the day.
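In rough pseudocode, the rule I’m proposing would look something like this (made-up names, obviously not the real internals):

```python
# Sketch of my proposal: the due list holds at most one instance of a term,
# and the count only drops on a correct first attempt.

def add_due(due_list, term):
    """Only ever add one instance of a term to the due list."""
    if term not in due_list:
        due_list.append(term)

def review(due_list, term, correct_first_try):
    """Drop the term only when it's answered right on the first attempt;
    otherwise it stays until its repetitions for the day are finished."""
    if correct_first_try and term in due_list:
        due_list.remove(term)

due = []
add_due(due, "當")
add_due(due, "當")                          # duplicate is ignored
review(due, "當", correct_first_try=True)   # list drops by exactly one
```

A wrong answer leaves the count alone, which is exactly the ambiguity I can live with, as long as correct first tries visibly move the number.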

I have also complained in the past about the items due list not dropping, but that was because it never dropped at all, right answer or not. It’s an entirely different situation when there’s some kind of logic in determining how the list drops. As long as that’s visibly operating, then it’s not frustrating, or not beyond whatever frustration arises when one can’t remember an oft-forgotten character.

Could somebody at least tell me if this is being worked on, or if the consensus is that the system is fine as it is? I need to know whether I can expect this to be fixed eventually.

Thanks

We are still working on the issue in general and making adjustments when we are able to consistently reproduce various scenarios. The main challenge is that this issue isn’t necessarily directly related to our SRS system and is often account-specific in nature. Today’s website update could possibly address some of these issues.

The repetition issue appears significantly worse now. I’m seeing rotating blocks of about 5 terms, after which the app churns for a few seconds to process or check in with the server, and then a new block starts up which includes at least two of the terms I just cleared. With my queue currently sitting at just under 1500 terms after the weekend backlog, that means I have to prepare myself to endure 300 such blocks in order to get the list cleared back down to zero.

I don’t think I can face that today. I appreciate that solutions aren’t easy or obvious, but good lord am I burned out on this issue. Hope something helpful and relatively easily implemented emerges soon.

This is interesting.

When the Skritter queue becomes long, it takes a lot of effort to clear, and the last thing you want is extra repetition. I thought that I was seeing the same sort of problem, but during a review of 73 characters this afternoon, there wasn’t a single repeat. Do the repeats only happen when the queue is long? Under some other condition? Did Skritter change something to fix this problem?

It might make a difference which client you’re using. I only use the Android app, but I’ve noticed less repetition (in the past) on the web versions. It also seems to be an individual problem, like the “fetching next” loop, which looks like only a severe problem for a few people, requiring spot fixes.

Whatever it is, it’s worse than I reported yesterday: every third term is a repeat, as I found out immediately after posting. I don’t even have enough time for the character image to clear off my retinas before I’m seeing it again, much less time to forget it.

Is this repetition issue any kind of priority right now? I’d like to check the app periodically to see if anything’s improved, but with my study list backed up multiple thousands of terms, I know I’d have to clear them all out before even seeing how the new items are being added. I’ve taken about a week off so far, and if I might as well snooze for the next few months, this’d be a good time to know.

If changes have been pushed through recently, could I get my queue cleared on the server side? Right now I’m showing just under 4200 items. Hey! Is it possible to check how many of those are unique terms? I’d be willing to make a friendly wager that there are no more than 800 unique terms in the list.

It is always something we’re looking into. It’s a bit hard to define as SRS has, by definition, repetition and Skritter doesn’t cut you off like Anki. However the next update adds back in auto-adding, which should improve things for some accounts. My main account has a couple thousand reviews stacked up (I justify it as necessary to have at least one backed up account for testing :stuck_out_tongue:), but I’m able to move through reviews pretty well when I put time in.

We’re very confident in the scheduler and the underlying data of the system itself, so if something else is going wrong, it’s likely in how we’re retrieving the data. So we’ve done testing like recording all the prompts we see in an Excel spreadsheet, comparing them to expected database values, and then looking into possible reasons for mismatches. We’ve played around with database and API sync values so that syncs happen more frequently, which seems to have decreased these repetitions.

There are a couple points that we still need to explore deeper, but my feeling is that it’s not a fundamental issue, but rather some sort of mismatch with how we’re feeding the data into the client, like we don’t have a hose somewhere quite screwed on all the way. And given SRS is complex and values always change (especially as you have to study your queue to find problems), it’s hard to pinpoint one place in one go. But rest assured, we are still working on it.

Thanks, please keep working on this one. It may not be a fundamental problem, but it is still a real problem when it happens.

Like hz, I use the Android App, but I haven’t been seeing much repetition recently. I’ve slacked off on adding new words, and my queue has fewer than 200 words. During review, occasionally the app pauses and spins for a few seconds, and then it repeats the exact same card. Since it only happens 0 to 2 times per review session, it doesn’t bother me, but maybe these are clues as to what is going on.

That’s similar to what mine does. I have blocks of terms that pass without repetition, then at the end of it, 1-3 terms repeat, and then the app pauses and spins. The variable is the size of the block: I haven’t counted out the range, but offhand I’d guess it goes from 3-10 terms. Sometimes the app checks in (spins) every other term, sometimes it takes a while longer, but it never goes above about 10 before I see repetition.

And my queue is up around 5500 now, and for the last 24-36 hours the app has been stuck in another “fetching next” loop, so it’s completely unusable. I have serious doubts that I’ll ever get it cleared now.

Ugh… I’m new to Skritter but user hz is describing exactly what I’m experiencing with the Android app in ‘beta’. Items are stuck in a loop with the app ignoring my ‘3’ or ‘4’ input. Now that I have a decent size back catalog (191 words, 51 characters) Skritter is ineffectual for learning new items. After a single practice opportunity new items don’t appear for 10 or 20 minutes. This does little to help me encode and leads to frustration when a second glimpse finally appears long after the initial impression is gone from my working memory.

Is there any reason Skritter is missing a separate learning mode for new items similar to Memrise? It seems this by itself would cut down on the issue with reviews by more appropriately determining SRS intervals.

I’m only using Skritter for the writing practice, if it makes any difference. Even there, with the kana, I’m finding major inconsistency in input recognition. Some characters accept a loose approximation (emphasizing the kanji knowledge, as I would hope). Others require a punishing exactitude during input (way over-relying on fine-motor ability) that creates a roadblock to any flow of study. By itself this might not be concerning, but to discover that the input issue persists two years after this discussion thread seems a major red flag - “wrong” strokes

I admit being a bit in shock that the execution is so off in a product that is presented with a premium packaging (i.e. the website) at a premium price. At this level everything should just work and never require users to battle with the technology.

I just bought in tonight, and yet within an hour I’m having an intense battle in my head about asking for a refund. The potential of this system is immense, but if it’s as half-baked and underdeveloped as I’m reading and experiencing, then I don’t understand how a user base is retained. Perhaps it’s just a side project? Why else would major usability issues exist for months at a time?

I’m sorry if my tone is negative, as I very much want to encourage the growth in potential of the app. Unfortunately, I’m worried that making a commitment to battle Skritter’s flaws will impede my learning at a time when I’m otherwise on a roll in my Japanese acquisition.

At the end of the day all I really want is to spend an hour every night practicing with this - please help me to see some light at the end of the tunnel.

2 Likes

Switched back to the legacy app today, and I have to say I’d forgotten how much better it functions. I must’ve run through 50 characters straight without a single repetition or the app pausing once to reconnect to the server, and when I finally did see repetitions, it was (probably) because they were characters I’d missed earlier in the session.

It does suck having to draw every single character now without the option to skip the easy ones, but since it’s been so long since I’ve had a functional app that didn’t drive me nuts after 10 minutes, I could probably use the extra practice now anyway.

If anyone’s having the same problems I’ve described here, I’d recommend switching back. Far from ideal, but a big step up. Or so far today. Hope my good results hold up.

2 Likes

Switched back again to legacy after another attempt with beta. There are improvements, and I miss the functionality of being able to look up a word’s frequency and examples of usage, but it was repeating the old problem of getting stuck in micro loops of 1-3 characters and presenting them over and over again in order. Also, way more repetition.

That said, even with the old app there are still serious problems with the study order, and in the last few days I’ve found a perfect example. The tone review for 當 keeps popping up in my feed, not every day necessarily, but definitely at least every other day. This particular character was beyond familiar to me before I ever even heard of Skritter, so it’s extremely unlikely that I’ve ever marked it wrong. I should be seeing it once every couple of years. Instead, I’ve probably seen it at least 5 times since Monday, because that’s the other thing: I never see a term only once, no matter how I mark it.

Anyway, I was thinking a specific example might provide a means for tracking down the problem, and that it might be worth passing along.

1 Like