• tguillemin

    In order to have a river-style (Dave Winer) feed of book reviews from some great newspapers (NYT, The Economist, Le Monde, Japan Times), I created an RSSMix feed of the 4 and "borrowed" a script from [Parse RSS feed with Python](http://www.idiotinside.com/2017/06/08/parse-rss-feed-with-python/).

    Here is the script:

    # coding: utf-8
    import os
    import sys
    import feedparser
    import console
    # source: http://www.idiotinside.com/2017/06/08/parse-rss-feed-with-python/
    feed = feedparser.parse("http://www.rssmix.com/u/8265752/rss.xml")
    # RSSMix of book reviews from: NYT, TE, LM, JT
    feed_title = feed['feed']['title']
    feed_entries = feed.entries
    for entry in feed.entries:
        article_title = entry.title
        article_link = entry.link
        article_published_at = entry.published  # Unicode string
        article_published_at_parsed = entry.published_parsed  # Time object
        article_description = entry.description
        article_summary = entry.summary
        #article_tags = entry.tags.label    <--- problem
        print(article_title)
        print(article_published_at)
        print(article_link)
        print(article_summary)
        #print(article_tags)   <--- problem
        print(" ")
        print("....................")
        print(" ")
    file_name = os.path.basename(sys.argv[0])

    All in all, it works.
    I nevertheless encounter a few problems:

    • I would like the most recent entry at the top of the output, whereas the script leaves the page positioned at the bottom
    • I cannot figure out how to grab the entries' tags, which would allow me to "filter" some entries
    • The output keeps on growing… How do I eliminate entries older than, e.g., 30 days?

    Thanks in advance for your help.

    posted in Pythonista
  • tguillemin

    @dgelessus: Thank you for your answer, which is (approximately) what I had figured.

    How could one make the suggestion/request/wish to @olemoritz to "unleash" the power of regex in that specific action (and elsewhere, of course, if needed)?

    This would make the irreplaceable Editorial even better.

    Thanks again

    posted in Editorial
  • tguillemin

    BTW, I also tried this simpler solution:


    which also works in Regex101, but, alas, not in the "Fold Lines Containing" action…

    posted in Editorial
  • tguillemin

    Do you mean \n?


    I tried it, but it does not work.

    Thank you anyway

    posted in Editorial
  • tguillemin

    I have the following text:

    This is paragraph #1   
    [comment]: # (London, Paris)   
    This is paragraph #2   
    [comment]: # (Paris, Berlin)   
    This is paragraph #3   
    [comment]: # (London, Berlin)   
    This is paragraph #4 
    [comment]: # (Paris)
    This   is paragraph #5   
    [comment]: # (Berlin)

    I want to match all the lines containing Berlin AND, for each match, the line before, in order to fold all the lines NOT containing this pattern:

    This is paragraph #2   
    [comment]: # (Paris, Berlin)   
    This is paragraph #3   
    [comment]: # (London, Berlin)   
    This   is paragraph #5   
    [comment]: # (Berlin)

    I came up with this solution

    which works under Python in Regex101 and which you can find at
    [https://regex101.com/r/jXD7gw/3](https://regex101.com/r/jXD7gw/3).
    But when I try it in Editorial, it fails (a simple workflow with FOLD LINES CONTAINING… using that regex and, of course, the INVERT button on).
    Where did I go wrong?
    Thanks in advance
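For what it's worth, the "line containing Berlin plus the line before it" requirement can be expressed as a single multiline regex; here is a minimal Python sketch (not necessarily the pattern from the regex101 link, just one way to do it):

```python
import re

text = """This is paragraph #1
[comment]: # (London, Paris)
This is paragraph #2
[comment]: # (Paris, Berlin)
This is paragraph #3
[comment]: # (London, Berlin)
This is paragraph #4
[comment]: # (Paris)
This is paragraph #5
[comment]: # (Berlin)"""

# With re.MULTILINE, '^' and '$' match at line boundaries, and '.' never
# crosses a newline, so the pattern below matches one complete line, the
# newline, and then a line containing Berlin.
pattern = re.compile(r"^.*\n.*Berlin.*$", re.MULTILINE)
matches = pattern.findall(text)
for block in matches:
    print(block + "\n")
```

This yields the 3 pairs from the desired output above. Whether the "Fold Lines Containing" action accepts multiline patterns is another matter: if it evaluates the regex line by line, a `\n` in the pattern can never match, which would explain the failure.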

    posted in Editorial
  • tguillemin


    I have the feeling, after using your VERY useful workflow, that we address 2 different cases:

    • my solution: if the contents of a fold contain a specific pattern (and there can be as many different patterns as there are different words in the text), then keep the fold open (unfolded) with its contents
    • your solution: if a pattern is found, and this pattern is bound to be more specific (@something, or, as I tried, #\d{3}, etc.), keep the fold unfolded with its contents whether the pattern is found in the header or in the contents of the fold

    My conclusion - and I am very grateful for your answer - is that these 2 solutions are complementary, for my specific uses anyway.

    Thanks again for your help

    posted in Editorial
  • tguillemin

    Very glad I could be of some help to you.

    But, you know, I think both of us should above all thank omz for this very remarkable software…

    Thanks again for your answer

    posted in Editorial
  • tguillemin

    First of all, thanks for your answer.
    I had indeed moved on, because I had found a solution suited to my needs:


    As you can see (I hope you will be able to read it: it is the first time I share a workflow), this contraption:

    • is in French
    • includes a Critic Markup title hypothesis (you may set it aside)

    I suppose you could turn this into a real Rube Goldberg machine by introducing a choice between 2 patterns, which would allow choosing among:

    • A
    • NOT A
    • A OR B
    • A AND B
    • A AND NOT B
    • NOT (A OR B)
    • NOT (A AND B)

    but then I decided to Keep It Simple.
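Should anyone want the Rube Goldberg version anyway: each of those boolean combinations can be written as a single regex using lookaheads, so a workflow would still need only one pattern input. A sketch, where A and B stand for any two sub-patterns:

```python
import re

# Anchored lookaheads turn boolean combinations of two sub-patterns
# A and B into single regexes that test a whole line at once.
patterns = {
    "A":             r"A",
    "NOT A":         r"^(?!.*A)",
    "A OR B":        r"A|B",
    "A AND B":       r"^(?=.*A)(?=.*B)",
    "A AND NOT B":   r"^(?=.*A)(?!.*B)",
    "NOT (A OR B)":  r"^(?!.*A)(?!.*B)",
    "NOT (A AND B)": r"^(?!(?=.*A)(?=.*B))",
}

line = "foo A bar B baz"  # contains both A and B
for name, pat in patterns.items():
    print(name, "->", bool(re.search(pat, line)))
```

The lookaheads consume no text, so each combined pattern still matches (or rejects) the line as a whole, which is exactly what a fold-lines action needs.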

    Thanks again

    posted in Editorial
  • tguillemin

    I have encountered 2 different positions for the placeholder label:

    • in lieu of the folded lines, i.e. under the "folding" header
    • at the end of the "folding" header

    This second type seems to have 2 advantages: it is more compact, and it allows moving the block (header and folded lines) up/down in one step.

    Is it possible to choose between those 2 configurations?


    posted in Editorial
  • tguillemin

    @ccc said:

    lines = '\n'.join(line[:1].title() + line[1:] for line in lines.splitlines()) + '\n'

    I had not tried it with blank lines (they do not occur in my file).
    Nevertheless, I tried your solution - successfully - after inserting those blank lines.

    Thank you. That will most certainly be useful some day…!

    posted in Editorial
