
  1. That's… really off the mark. I specifically said it doesn't work _without_ r=o. The following two queries (API key omitted, obviously) use ot timestamps two weeks apart (the first is from 08/06, the second from 07/25), yet both return the same set of latest items, the newest of which as of right now has the timestamp 1502607600 (so yesterday) and thus should not have been retrieved by either query.
     https://www.inoreader.com/reader/api/0/stream/contents/feed/http://feeds.feedburner.com/CrackedRSS?AppId=1000000562&AppKey=<>&ot=1502000000
     https://www.inoreader.com/reader/api/0/stream/contents/feed/http://feeds.feedburner.com/CrackedRSS?AppId=1000000562&AppKey=<>&ot=1501000000
  2. Let's say I want to grab the feed https://www.inoreader.com/reader/api/0/stream/contents/feed/http://feeds.feedburner.com/CrackedRSS . If I pass any past timestamp in the ot parameter, I get the same set of latest items. If I pass any future timestamp, I get a blank result. It seems to work correctly with r=o, but I need newest-first order. Am I missing something, or is this indeed bugged? Related: is it possible to get the next page of results in some more convenient way than grabbing the timestamp of the last item and passing it in ot?
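    On the pagination question: the Google Reader-style API that Inoreader implements conventionally returns a `continuation` token in each stream-contents response, which is passed back as the `c` query parameter to fetch the next page. A minimal sketch of that step, assuming those GReader-convention names apply here (worth checking against Inoreader's API docs):

    ```javascript
    // Build the URL for the next page of a stream, given the parsed JSON
    // body of the previous page. GReader-style APIs return a "continuation"
    // token that is passed back as the "c" query parameter; these names are
    // the GReader convention, assumed (not verified) to match Inoreader.
    function nextPageUrl(baseUrl, previousPage) {
      if (!previousPage.continuation) {
        return null; // no token means there are no more pages
      }
      var sep = baseUrl.indexOf('?') === -1 ? '?' : '&';
      return baseUrl + sep + 'c=' + encodeURIComponent(previousPage.continuation);
    }
    ```

    If this works as in the GReader convention, it would avoid the off-by-one risk of reusing the last item's timestamp as the next ot.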
  3. I extensively customize Inoreader with CSS, and one of those customizations is highlighting items with variously coloured borders depending on keywords in the title and article content. This works perfectly in expanded and list views, but in card view the content isn't present, which limits me to title-based highlighting. It would be extremely useful to me if the content were always present but invisible, instead of being loaded when a card is clicked. Otherwise, I'll be forced to butcher the expanded view into a makeshift card view for certain feed URLs, which would be extremely unpleasant.
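    For anyone wanting to replicate the title-based half of this, the keyword-to-colour mapping can live in a small userscript helper; the rules, colours and the jQuery application shown in the comments are purely illustrative (the class names in particular are hypothetical):

    ```javascript
    // Map keywords found in an article's title (or content, where present)
    // to a border colour. Keywords and colours here are examples only.
    var HIGHLIGHT_RULES = [
      { keyword: 'release', color: 'green' },
      { keyword: 'spoiler', color: 'red' }
    ];

    // Return the colour for the first matching rule, or null for no match.
    function borderColorFor(text) {
      var lower = text.toLowerCase();
      for (var i = 0; i < HIGHLIGHT_RULES.length; i++) {
        if (lower.indexOf(HIGHLIGHT_RULES[i].keyword) !== -1) {
          return HIGHLIGHT_RULES[i].color;
        }
      }
      return null;
    }

    // In a userscript this would be applied to each article node, e.g.
    // (class names are made up for illustration):
    // $('.article_title').each(function () {
    //   var color = borderColorFor($(this).text());
    //   if (color) {
    //     $(this).closest('.article').css('border', '2px solid ' + color);
    //   }
    // });
    ```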
  4. Jump to earliest unread article

    I wrote a userscript to do this exact thing:

    ```javascript
    $(document).keydown(function (e) {
      switch (e.which) {
        case 13: // enter
          var next = $(".article_unreaded")[0];
          next.parentNode.scrollTop = next.offsetTop;
          next.click();
          break;
        default:
          return; // exit this handler for other keys
      }
    });
    ```

    This scrolls to the first unread article and clicks it to mark it as read, so we can scroll to the next one later.
  5. No, that's not it. As I said in the crossed-out part, other feeds from this server were working, and the server isn't accessible 24/7 anyway, yet the crawler log showed this error for 24+ hours straight. The URL is unchanged, so I assume you meant the XML. Here is a minimized version trimmed to three entries: the first displays normally in Firefox's preview; the second contains the backspace, which breaks that item and all the following ones. Removing the backspace shows all three items normally and allows the feed to be read by Inoreader. Posting a plaintext version is pointless, since the character gets stripped along the way.
  6. I have several feeds hosted on my server. Of them, exactly one (http://dariush.duckdns.org/dailypixiv.xml) broke between 1 and 2 days ago with the above error. The feed is accessible from the internet (verified via FeedValidator). The server is accessible daily between around 9.30 and 23.00 GMT. Edit: by the universal internet law of figuring any problem out five minutes after asking for help with it, I found the culprit: this feed had a backspace character (shown as 'BS' in a text editor) in it. Removing it allowed Inoreader to fetch the feed. However, the error is still definitely not correct: even if the character breaks the feed, that shouldn't have anything to do with route to host. Irrelevant to Inoreader, but why does it break the feed itself? Firefox's RSS preview only shows items up to the one containing this character, and FeedValidator fails to validate the feed.
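    To answer the "why does it break the feed itself" part: XML 1.0 forbids all C0 control characters except tab (0x09), LF (0x0A) and CR (0x0D), so a literal backspace (0x08) makes the document ill-formed, and a conforming parser has to stop right there; that matches Firefox's preview and FeedValidator both giving up at that item. A sketch of a pre-filter one could run over the feed before serving it (the function name is mine):

    ```javascript
    // Strip C0 control characters other than tab/LF/CR, which XML 1.0
    // does not allow anywhere in a document. (Other code points such as
    // 0xFFFE/0xFFFF are also invalid, but the C0 range covers the
    // backspace case from this feed.)
    function stripInvalidXmlChars(text) {
      return text.replace(/[\x00-\x08\x0B\x0C\x0E-\x1F]/g, '');
    }
    ```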
  7. Allow setting a 'Sequential' flag on a folder that does two things: 1) forces manual sort on the folder; 2) when the folder is viewed, instead of mixing articles from all feeds, they are displayed in order: first all articles from the first feed, then a separator, then the second feed, and so on. The view settings of each individual feed would be used, as opposed to the folder imposing the same view on all feeds inside.
  8. Ah, I see. Could this be made into an option or something, though? I don't even know how many of my feeds lack pubDate and are thus broken for now, and I'm perfectly fine with duplicates. I also strongly suspect I'm far from the only one who would prefer getting duplicate entries to not getting new ones at all.
  9. I am subscribed to this feed: http://dariush.duckdns.org/dailypixiv.xml. It is hosted locally, so it's often unavailable, but normally the crawler checks it while my computer is turned on and displays the updates as expected. However, after Inoreader came back online today, the crawler log started saying 'Success - Not Modified' despite there being new items in the feed, and those items are not displayed. Another feed I am subscribed to, http://dariush.duckdns.org/pixiv.xml (NSFW), has a log that looks like this: there are several normal successes at the times when new items appear in the feed, which I can see when I open the feed itself; however, the newest items in Inoreader's copy are dated 1 day ago.
  10. Um, I didn't make it clear in the previous post, but the counter should reset to 0 once we get a non-unique update. Item date shouldn't matter at all: if it's absent, Inoreader takes the last 100 articles in the order in which they are given in the feed. If the site changes ordering or formatting, the counter will increase to 1 and return to 0 on the following crawl. Reaction to item updates will depend on whether Inoreader thinks an updated item becomes a new article (IIRC, that happens if the item link changes): the only way that can matter is if we have a situation where the counter should increase, but the feed is crawled at the exact moment a freshly updated item that Inoreader considers 'old' happens to be last in the list. All this unlikely coincidence would do is delay the warning by one or two crawls. If there's a temporary wave of updates, the warning will appear and then disappear when the counter resets after the wave is over and the crawls return to being non-unique.
  11. Oh well. Though I have to admit I don't see how my proposal would require 'a lot of additional hardware'. All you need to additionally store is a single integer per feed: the number of consecutive times a completely unique set was returned for this feed. The counter is incremented whenever your software deduplicates entries (which it is already doing) and finds that there is nothing to dedupe. Alternatively, if I misunderstood how your deduplication works, just check that the oldest entry in the new set is absent from the existing set, and increment the counter if that is the case. Finally, show a warning to the user if the counter is above a certain value. No additional infrastructure required.
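    For concreteness, the whole bookkeeping described above fits in a few lines; this is a sketch with hypothetical names, not Inoreader's actual crawler code:

    ```javascript
    // Per-feed counter of consecutive crawls whose items were all unseen.
    // If every item in the fresh crawl is new, some items have probably
    // fallen between two crawls, so the streak grows; any overlap with the
    // previously seen set resets it to 0.
    var UNIQUE_STREAK_THRESHOLD = 3; // warn after this many unique crawls

    function updateUniqueStreak(previousIds, newIds, streak) {
      var previous = {};
      previousIds.forEach(function (id) { previous[id] = true; });
      var allNew = newIds.every(function (id) { return !previous[id]; });
      return allNew ? streak + 1 : 0;
    }

    function shouldWarn(streak) {
      return streak >= UNIQUE_STREAK_THRESHOLD;
    }
    ```

    So the only persistent state per feed is the one integer; the ID comparison piggybacks on the deduplication the crawler already performs.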
  12. Sorry for the wordy title, but I couldn't find a better way to express it. I am subscribed to a very high-volume feed (1k+ updates per day), and Inoreader put its update speed at 30 minutes. However, the feed itself only returns the last 20 entries, which means that a lot of updates 'slip through the cracks'. I noticed this myself and found a solution, but I had already lost a lot of updates without noticing. Thus, what I suggest is to warn the user when the crawler at its maximum non-boosted speed gets, let's say, three completely unique sets of items in a row when it crawls the feed. The reasoning is that if no items are left over from the previous crawl, it's quite likely that there were some additional items between the last one of the previous crawl and the first one of the current one.
  13. Several card view changes

    Aaaand I've done what I wanted with the following bit of CSS (complemented by a Greasemonkey script that makes thumbnails clickable):

    ```css
    .article_tile_picture, .article_tile_content_wraper {
      background-size: contain !important;
      height: 100%;
    }
    .article_tile {
      min-width: 0px !important;
      width: 150px !important;
      height: 150px !important;
    }
    .article_tile_footer, .article_tile_title {
      display: none !important;
    }
    ```
  14. Several card view changes

    That's exactly what I needed, thanks! (or at least one of those three things... )
  15. Several card view changes

    I installed it and it's mostly what I want, except for the following things: 1. I would like more tiles per row (maybe 8?), not fewer. I found no easy way to modify your code to do this, other than adding additional blocks for each column after line 87 ($('#reader_pane .article_card').each(function(index) {). 2. I would like it to replace card view, not expanded view. 3. Items in a row aren't aligned. Unfortunately, #2 is critical, and thus it's not usable for me.