In [1]:
!date  # written on this date; the code was run the day before
Sun Sep  8 07:00:59 CDT 2019
In [1]:
import feedparser
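(If the import fails, feedparser is a third-party package from PyPI; installing it from inside the notebook is one option. This install line is my addition, not part of the original run.)

!pip install feedparser  # or run `pip install feedparser` from a shell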
In [2]:
feed = feedparser.parse("https://www.reddit.com/r/DataHoarder.rss")
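Before dumping the whole structure, a quick sanity check that the fetch and parse succeeded can help. The cell below is a sketch I've added (not part of the original notebook); it only uses documented feedparser attributes: bozo flags a malformed feed, status holds the HTTP status when the feed was fetched over the network, and entries is the list of parsed items.

# added sketch, not in the original notebook
print("HTTP status:", feed.get("status"))   # e.g. 200 when fetched over HTTP
print("malformed?  ", bool(feed.bozo))      # True if feedparser hit a parse problem
print("entries:    ", len(feed.entries))
for entry in feed.entries[:3]:              # peek at the first few items
    print("-", entry.title)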
In [3]:
feed
Out[3]:
{'feed': {'tags': [{'term': 'DataHoarder',
    'scheme': None,
    'label': 'r/DataHoarder'}],
  'updated': '2019-09-08T00:31:53+00:00',
  'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=8, tm_hour=0, tm_min=31, tm_sec=53, tm_wday=6, tm_yday=251, tm_isdst=0),
  'icon': 'https://www.redditstatic.com/icon.png/',
  'id': 'https://www.reddit.com/r/DataHoarder.rss',
  'guidislink': True,
  'link': 'https://www.reddit.com/r/DataHoarder',
  'links': [{'rel': 'self',
    'href': 'https://www.reddit.com/r/DataHoarder.rss',
    'type': 'application/atom+xml'},
   {'rel': 'alternate',
    'href': 'https://www.reddit.com/r/DataHoarder',
    'type': 'text/html'}],
  'subtitle': 'This is a sub that aims at bringing data hoarders together to share their passion with like minded people.',
  'subtitle_detail': {'type': 'text/plain',
   'language': None,
   'base': 'https://www.reddit.com/r/DataHoarder.rss',
   'value': 'This is a sub that aims at bringing data hoarders together to share their passion with like minded people.'},
  'title': "It's A Digital Disease!",
  'title_detail': {'type': 'text/plain',
   'language': None,
   'base': 'https://www.reddit.com/r/DataHoarder.rss',
   'value': "It's A Digital Disease!"}},
 'entries': [{'authors': [{'name': '/u/naughtytroll',
     'href': 'https://www.reddit.com/user/naughtytroll'}],
   'author_detail': {'name': '/u/naughtytroll',
    'href': 'https://www.reddit.com/user/naughtytroll'},
   'href': 'https://www.reddit.com/user/naughtytroll',
   'author': '/u/naughtytroll',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>Hey, quick reminder, posting pictures of a bunch of HDDs you just bought ISN&#39;T interesting, it&#39;s boring and I&#39;m tired of seeing them all, I&#39;m tired of seeing &quot;Am I part of this now?&quot; or &quot;Am I doing this the right way?&quot; posts.</p> <p>It&#39;s not because you have 100TB free of storage on your server that you are a data hoarder, and there is no &quot;good way to do this&quot;.</p> <p>Data hoarding isn&#39;t about just buying $3000 worth of hard drives just for posting them here. What&#39;s interesting is what you do with your storage.</p> <p>If you just have 1TB of storage but you do something freakin&#39; cool with it, what you can share here is way more important than someone buying 30TB of storage and never post again here.</p> <p>Please, focus on what we love, the DATA, not the storage medium, please focus on projects, on archiving, on digital preservation.</p> <p>Thanks. Post <a href="https://old.reddit.com/r/DataHoarder/comments/9zxvf2/please_stop_posting_pictures_of_your_hard_drives/">inspired</a> by <a href="/u/Nooco24">u/Nooco24</a></p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/naughtytroll"> /u/naughtytroll </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/c3mahr/please_stop_posting_photos_of_your_hard_drives/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/c3mahr/please_stop_posting_photos_of_your_hard_drives/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>Hey, quick reminder, posting pictures of a bunch of HDDs you just bought ISN&#39;T interesting, it&#39;s boring and I&#39;m tired of seeing them all, I&#39;m tired of seeing &quot;Am I part of this now?&quot; or &quot;Am I doing this the right way?&quot; posts.</p> <p>It&#39;s not because you have 100TB free of storage on your server that you are a data hoarder, and there is no &quot;good way to do this&quot;.</p> <p>Data hoarding isn&#39;t about just buying $3000 worth of hard drives just for posting them here. What&#39;s interesting is what you do with your storage.</p> <p>If you just have 1TB of storage but you do something freakin&#39; cool with it, what you can share here is way more important than someone buying 30TB of storage and never post again here.</p> <p>Please, focus on what we love, the DATA, not the storage medium, please focus on projects, on archiving, on digital preservation.</p> <p>Thanks. Post <a href="https://old.reddit.com/r/DataHoarder/comments/9zxvf2/please_stop_posting_pictures_of_your_hard_drives/">inspired</a> by <a href="/u/Nooco24">u/Nooco24</a></p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/naughtytroll"> /u/naughtytroll </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/c3mahr/please_stop_posting_photos_of_your_hard_drives/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/c3mahr/please_stop_posting_photos_of_your_hard_drives/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_c3mahr',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/c3mahr/please_stop_posting_photos_of_your_hard_drives/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/c3mahr/please_stop_posting_photos_of_your_hard_drives/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-06-22T06:27:01+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=6, tm_mday=22, tm_hour=6, tm_min=27, tm_sec=1, tm_wday=5, tm_yday=173, tm_isdst=0),
   'title': 'Please stop posting photos of your hard drives.',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Please stop posting photos of your hard drives.'}},
  {'authors': [{'name': '/u/xqzc',
     'href': 'https://www.reddit.com/user/xqzc'}],
   'author_detail': {'name': '/u/xqzc',
    'href': 'https://www.reddit.com/user/xqzc'},
   'href': 'https://www.reddit.com/user/xqzc',
   'author': '/u/xqzc',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '&#32; submitted by &#32; <a href="https://www.reddit.com/user/xqzc"> /u/xqzc </a> <br/> <span><a href="https://abra.me/pan/mp4/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/cuybka/ive_saved_reddits_rpan_experiment_900_gb_3_months/">[comments]</a></span>'}],
   'summary': '&#32; submitted by &#32; <a href="https://www.reddit.com/user/xqzc"> /u/xqzc </a> <br/> <span><a href="https://abra.me/pan/mp4/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/cuybka/ive_saved_reddits_rpan_experiment_900_gb_3_months/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_cuybka',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/cuybka/ive_saved_reddits_rpan_experiment_900_gb_3_months/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/cuybka/ive_saved_reddits_rpan_experiment_900_gb_3_months/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-08-24T20:02:47+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=8, tm_mday=24, tm_hour=20, tm_min=2, tm_sec=47, tm_wday=5, tm_yday=236, tm_isdst=0),
   'title': "I've saved reddit's RPAN experiment: 900+ GB, 3 months of video. I would appreciate somebody mirroring it.",
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': "I've saved reddit's RPAN experiment: 900+ GB, 3 months of video. I would appreciate somebody mirroring it."}},
  {'authors': [{'name': '/u/InternetArchiver1',
     'href': 'https://www.reddit.com/user/InternetArchiver1'}],
   'author_detail': {'name': '/u/InternetArchiver1',
    'href': 'https://www.reddit.com/user/InternetArchiver1'},
   'href': 'https://www.reddit.com/user/InternetArchiver1',
   'author': '/u/InternetArchiver1',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<table> <tr><td> <a href="https://www.reddit.com/r/DataHoarder/comments/d0ucp6/all_donald_ducks_released_on_finland_will_be_free/"> <img src="https://b.thumbs.redditmedia.com/OBd7fD3O_IYrhi9fuYNvzN9Dh5a0r6kYt6TE4PE-tio.jpg" alt="All Donald Duck’s released on Finland will be free to read tomorrow. Someone archive them?" title="All Donald Duck’s released on Finland will be free to read tomorrow. Someone archive them?" /> </a> </td><td> &#32; submitted by &#32; <a href="https://www.reddit.com/user/InternetArchiver1"> /u/InternetArchiver1 </a> <br/> <span><a href="https://www.akuankka.fi/artikkeli/58/kaikki-suomessa-julkaistut-aku-ankat-luettavissa-ilmaiseksi-read-hour-sunnuntaina-8-9-">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0ucp6/all_donald_ducks_released_on_finland_will_be_free/">[comments]</a></span> </td></tr></table>'}],
   'summary': '<table> <tr><td> <a href="https://www.reddit.com/r/DataHoarder/comments/d0ucp6/all_donald_ducks_released_on_finland_will_be_free/"> <img src="https://b.thumbs.redditmedia.com/OBd7fD3O_IYrhi9fuYNvzN9Dh5a0r6kYt6TE4PE-tio.jpg" alt="All Donald Duck’s released on Finland will be free to read tomorrow. Someone archive them?" title="All Donald Duck’s released on Finland will be free to read tomorrow. Someone archive them?" /> </a> </td><td> &#32; submitted by &#32; <a href="https://www.reddit.com/user/InternetArchiver1"> /u/InternetArchiver1 </a> <br/> <span><a href="https://www.akuankka.fi/artikkeli/58/kaikki-suomessa-julkaistut-aku-ankat-luettavissa-ilmaiseksi-read-hour-sunnuntaina-8-9-">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0ucp6/all_donald_ducks_released_on_finland_will_be_free/">[comments]</a></span> </td></tr></table>',
   'id': 'https://www.reddit.com/r/t3_d0ucp6',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0ucp6/all_donald_ducks_released_on_finland_will_be_free/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0ucp6/all_donald_ducks_released_on_finland_will_be_free/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T10:21:45+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=10, tm_min=21, tm_sec=45, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'All Donald Duck’s released on Finland will be free to read tomorrow. Someone archive them?',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'All Donald Duck’s released on Finland will be free to read tomorrow. Someone archive them?'}},
  {'authors': [{'name': '/u/sachinhegde6',
     'href': 'https://www.reddit.com/user/sachinhegde6'}],
   'author_detail': {'name': '/u/sachinhegde6',
    'href': 'https://www.reddit.com/user/sachinhegde6'},
   'href': 'https://www.reddit.com/user/sachinhegde6',
   'author': '/u/sachinhegde6',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<table> <tr><td> <a href="https://www.reddit.com/r/DataHoarder/comments/d11t9o/smallest_setup_i_can_think_of_for_a_10_tb_setup/"> <img src="https://b.thumbs.redditmedia.com/cs2c3eEXsJSmJccuYumXyv0IchgmgAqzQCHzdeQhSVg.jpg" alt="Smallest setup i can think of for a 10 TB setup" title="Smallest setup i can think of for a 10 TB setup" /> </a> </td><td> &#32; submitted by &#32; <a href="https://www.reddit.com/user/sachinhegde6"> /u/sachinhegde6 </a> <br/> <span><a href="https://i.redd.it/vo0vpu5hr2l31.jpg">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d11t9o/smallest_setup_i_can_think_of_for_a_10_tb_setup/">[comments]</a></span> </td></tr></table>'}],
   'summary': '<table> <tr><td> <a href="https://www.reddit.com/r/DataHoarder/comments/d11t9o/smallest_setup_i_can_think_of_for_a_10_tb_setup/"> <img src="https://b.thumbs.redditmedia.com/cs2c3eEXsJSmJccuYumXyv0IchgmgAqzQCHzdeQhSVg.jpg" alt="Smallest setup i can think of for a 10 TB setup" title="Smallest setup i can think of for a 10 TB setup" /> </a> </td><td> &#32; submitted by &#32; <a href="https://www.reddit.com/user/sachinhegde6"> /u/sachinhegde6 </a> <br/> <span><a href="https://i.redd.it/vo0vpu5hr2l31.jpg">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d11t9o/smallest_setup_i_can_think_of_for_a_10_tb_setup/">[comments]</a></span> </td></tr></table>',
   'id': 'https://www.reddit.com/r/t3_d11t9o',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d11t9o/smallest_setup_i_can_think_of_for_a_10_tb_setup/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d11t9o/smallest_setup_i_can_think_of_for_a_10_tb_setup/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T21:18:18+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=21, tm_min=18, tm_sec=18, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Smallest setup i can think of for a 10 TB setup',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Smallest setup i can think of for a 10 TB setup'}},
  {'authors': [{'name': '/u/archiopteryx',
     'href': 'https://www.reddit.com/user/archiopteryx'}],
   'author_detail': {'name': '/u/archiopteryx',
    'href': 'https://www.reddit.com/user/archiopteryx'},
   'href': 'https://www.reddit.com/user/archiopteryx',
   'author': '/u/archiopteryx',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>Hello, I&#39;d like your help in backing up thingiverse.com. It&#39;s performance has decreased drastically recently while its importance to the 3d printing community remains strong. It&#39;s unlikely that it will remain in the future since it makes little or no money for Stratasys (the 3D printing company that purchased MakerBot). However, its system for thing URLs is very simple and makes it easy to backup. PM me if you are interested. Initially I&#39;ll give you a list of 10000 or 100000 URLs to backup, later we can make a site to mirror thingiverse and provide STLs with better search features. I have no good estimate of the size of each &quot;thing&quot; in bytes, but a decent overestimate is to say everything is 50Mb+. </p> <p>If you are interested in helping please PM me. There is some interest on <a href="/r/thingiverse">/r/thingiverse</a> but the main concern is data storage, so we need the help of those with very large storage capacity. Thanks for reading this.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/archiopteryx"> /u/archiopteryx </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d10yem/thingiversecom_backup/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d10yem/thingiversecom_backup/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>Hello, I&#39;d like your help in backing up thingiverse.com. It&#39;s performance has decreased drastically recently while its importance to the 3d printing community remains strong. It&#39;s unlikely that it will remain in the future since it makes little or no money for Stratasys (the 3D printing company that purchased MakerBot). However, its system for thing URLs is very simple and makes it easy to backup. PM me if you are interested. Initially I&#39;ll give you a list of 10000 or 100000 URLs to backup, later we can make a site to mirror thingiverse and provide STLs with better search features. I have no good estimate of the size of each &quot;thing&quot; in bytes, but a decent overestimate is to say everything is 50Mb+. </p> <p>If you are interested in helping please PM me. There is some interest on <a href="/r/thingiverse">/r/thingiverse</a> but the main concern is data storage, so we need the help of those with very large storage capacity. Thanks for reading this.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/archiopteryx"> /u/archiopteryx </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d10yem/thingiversecom_backup/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d10yem/thingiversecom_backup/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d10yem',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d10yem/thingiversecom_backup/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d10yem/thingiversecom_backup/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T20:09:41+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=20, tm_min=9, tm_sec=41, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Thingiverse.com Backup',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Thingiverse.com Backup'}},
  {'authors': [{'name': '/u/WPLibrar2',
     'href': 'https://www.reddit.com/user/WPLibrar2'}],
   'author_detail': {'name': '/u/WPLibrar2',
    'href': 'https://www.reddit.com/user/WPLibrar2'},
   'href': 'https://www.reddit.com/user/WPLibrar2',
   'author': '/u/WPLibrar2',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<table> <tr><td> <a href="https://www.reddit.com/r/DataHoarder/comments/d12q5g/pluralsight_offers_all_the_courses_for_free_this/"> <img src="https://b.thumbs.redditmedia.com/tla_CTK5tOzpHZhNvc1sNqwu9qgX-hz9_ZpQlB7G9Uk.jpg" alt="Pluralsight offers all the courses for free this weekend. Archiving possible." title="Pluralsight offers all the courses for free this weekend. Archiving possible." /> </a> </td><td> &#32; submitted by &#32; <a href="https://www.reddit.com/user/WPLibrar2"> /u/WPLibrar2 </a> <br/> <span><a href="https://www.pluralsight.com/offer/2019/september-free-weekend">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d12q5g/pluralsight_offers_all_the_courses_for_free_this/">[comments]</a></span> </td></tr></table>'}],
   'summary': '<table> <tr><td> <a href="https://www.reddit.com/r/DataHoarder/comments/d12q5g/pluralsight_offers_all_the_courses_for_free_this/"> <img src="https://b.thumbs.redditmedia.com/tla_CTK5tOzpHZhNvc1sNqwu9qgX-hz9_ZpQlB7G9Uk.jpg" alt="Pluralsight offers all the courses for free this weekend. Archiving possible." title="Pluralsight offers all the courses for free this weekend. Archiving possible." /> </a> </td><td> &#32; submitted by &#32; <a href="https://www.reddit.com/user/WPLibrar2"> /u/WPLibrar2 </a> <br/> <span><a href="https://www.pluralsight.com/offer/2019/september-free-weekend">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d12q5g/pluralsight_offers_all_the_courses_for_free_this/">[comments]</a></span> </td></tr></table>',
   'id': 'https://www.reddit.com/r/t3_d12q5g',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d12q5g/pluralsight_offers_all_the_courses_for_free_this/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d12q5g/pluralsight_offers_all_the_courses_for_free_this/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T22:33:17+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=22, tm_min=33, tm_sec=17, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Pluralsight offers all the courses for free this weekend. Archiving possible.',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Pluralsight offers all the courses for free this weekend. Archiving possible.'}},
  {'authors': [{'name': '/u/steamfrag',
     'href': 'https://www.reddit.com/user/steamfrag'}],
   'author_detail': {'name': '/u/steamfrag',
    'href': 'https://www.reddit.com/user/steamfrag'},
   'href': 'https://www.reddit.com/user/steamfrag',
   'author': '/u/steamfrag',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>Back in December 2018, <a href="/u/insanehomelesguy">/u/insanehomelesguy</a> <a href="https://old.reddit.com/r/DataHoarder/comments/a8xqwj/50k_images_from_the_art_institute_of_chicago_for/">put together</a> a torrent of artwork from the <a href="https://www.artic.edu/">Art Institute of Chicago</a> after they <a href="https://www.thisiscolossal.com/2018/10/art-institute-of-chicago-image-collection/">released 50,000+ of their images</a> under CC0 public domain.</p> <p>This is an updated torrent I&#39;ve put together with a bunch of changes. The site crawl was done in January 2019 (got sidetracked with other projects).</p> <p>Art changes:</p> <ul> <li>Replaced 1799 damaged files</li> <li>Removed 344 duplicates</li> <li>Removed 10,587 that were not CC0</li> <li>Added 13,013 that were missing</li> <li>Added artworks that contained multiple images</li> <li>Replaced upsized artworks with native image size (the site serves up whatever res you request, and the default is not always native res)</li> </ul> <p>Metadata changes:</p> <ul> <li>Added a tab-delimited metadata file, including image tags</li> <li>Added an archive of per-file extended metadata, including artwork descriptions</li> </ul> <p>File structure changes:</p> <ul> <li>Separated art and metadata folders</li> <li>Renamed files to include artwork ID and no uppercase or unicode</li> <li>Shortened all filenames to 100 characters</li> </ul> <p>Licencing details are here: <a href="https://www.artic.edu/image-licensing">https://www.artic.edu/image-licensing</a><br/> TL;DR: You can use all the artwork in this torrent for any commercial or non-commercial purpose.</p> <p>Torrent: https://me ga. nz/#!b1BEHa6A!yyrgVc1zg8QD0DW8Ot22Y7iLkSRf6c-Dx9bEfkF_wn8<br/> Magnet: magnet:?xt=urn:btih:58c9d1f5cdffe006c7f9dfb88b8e20bbd81efeb0&amp;dn=Art%20Institute%20of%20Chicago&amp;tr=http%3a%2f%2f91.217.91.21%3a3218%2fannounce&amp;tr=udp%3a%2f%2fexplodie.org%3a6969%2fannounce&amp;tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&amp;tr=udp%3a%2f%2ftracker.tiny-vps.com%3a6969%2fannounce&amp;tr=http%3a%2f%2ftracker.tvunderground.org.ru%3a3218%2fannounce&amp;tr=udp%3a%2f%2ftracker.yoshi210.com%3a6969%2fannounce&amp;tr=udp%3a%2f%2f151.80.120.114%3a2710%2fannounce&amp;tr=udp%3a%2f%2f62.138.0.158%3a6969%2fannounce&amp;tr=udp%3a%2f%2f9.rarbg.me%3a2780%2fannounce&amp;tr=udp%3a%2f%2fbt.xxx-tracker.com%3a2710%2fannounce&amp;tr=http%3a%2f%2fexplodie.org%3a6969%2fannounce&amp;tr=udp%3a%2f%2fopen.stealth.si%3a80%2fannounce&amp;tr=udp%3a%2f%2ftracker.coppersurfer.tk%3a6969%2fannounce&amp;tr=http%3a%2f%2ftracker.internetwarriors.net%3a1337%2fannounce&amp;tr=udp%3a%2f%2ftracker.leechers-paradise.org%3a6969%2fannounce&amp;tr=http%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&amp;tr=http%3a%2f%2ftracker.yoshi210.com%3a6969%2fannounce</p> <p>Hope those links work, I haven&#39;t made torrents in a long time. I can only seed 12 hours a day so stick with it if there are no seeds present. Everything you seed above 1:1 increases the strength and speed of the torrent.</p> <p>Honestly I think there&#39;s a lot of junk in this collection, but that&#39;s art for you. There&#39;s also a ton of awesome stuff, and with the metadata tags you could pull all the Renaissance oil paintings or Japanese woodblock prints or whatever. 
Cycle them as desktop wallpaper or discover something new.</p> <p>Huge thanks to <a href="/u/insanehomelesguy">/u/insanehomelesguy</a> for doing the initial version of this collection.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/steamfrag"> /u/steamfrag </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0wuae/50k_images_from_the_art_institute_of_chicago/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0wuae/50k_images_from_the_art_institute_of_chicago/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>Back in December 2018, <a href="/u/insanehomelesguy">/u/insanehomelesguy</a> <a href="https://old.reddit.com/r/DataHoarder/comments/a8xqwj/50k_images_from_the_art_institute_of_chicago_for/">put together</a> a torrent of artwork from the <a href="https://www.artic.edu/">Art Institute of Chicago</a> after they <a href="https://www.thisiscolossal.com/2018/10/art-institute-of-chicago-image-collection/">released 50,000+ of their images</a> under CC0 public domain.</p> <p>This is an updated torrent I&#39;ve put together with a bunch of changes. The site crawl was done in January 2019 (got sidetracked with other projects).</p> <p>Art changes:</p> <ul> <li>Replaced 1799 damaged files</li> <li>Removed 344 duplicates</li> <li>Removed 10,587 that were not CC0</li> <li>Added 13,013 that were missing</li> <li>Added artworks that contained multiple images</li> <li>Replaced upsized artworks with native image size (the site serves up whatever res you request, and the default is not always native res)</li> </ul> <p>Metadata changes:</p> <ul> <li>Added a tab-delimited metadata file, including image tags</li> <li>Added an archive of per-file extended metadata, including artwork descriptions</li> </ul> <p>File structure changes:</p> <ul> <li>Separated art and metadata folders</li> <li>Renamed files to include artwork ID and no uppercase or unicode</li> <li>Shortened all filenames to 100 characters</li> </ul> <p>Licencing details are here: <a href="https://www.artic.edu/image-licensing">https://www.artic.edu/image-licensing</a><br/> TL;DR: You can use all the artwork in this torrent for any commercial or non-commercial purpose.</p> <p>Torrent: https://me ga. nz/#!b1BEHa6A!yyrgVc1zg8QD0DW8Ot22Y7iLkSRf6c-Dx9bEfkF_wn8<br/> Magnet: magnet:?xt=urn:btih:58c9d1f5cdffe006c7f9dfb88b8e20bbd81efeb0&amp;dn=Art%20Institute%20of%20Chicago&amp;tr=http%3a%2f%2f91.217.91.21%3a3218%2fannounce&amp;tr=udp%3a%2f%2fexplodie.org%3a6969%2fannounce&amp;tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&amp;tr=udp%3a%2f%2ftracker.tiny-vps.com%3a6969%2fannounce&amp;tr=http%3a%2f%2ftracker.tvunderground.org.ru%3a3218%2fannounce&amp;tr=udp%3a%2f%2ftracker.yoshi210.com%3a6969%2fannounce&amp;tr=udp%3a%2f%2f151.80.120.114%3a2710%2fannounce&amp;tr=udp%3a%2f%2f62.138.0.158%3a6969%2fannounce&amp;tr=udp%3a%2f%2f9.rarbg.me%3a2780%2fannounce&amp;tr=udp%3a%2f%2fbt.xxx-tracker.com%3a2710%2fannounce&amp;tr=http%3a%2f%2fexplodie.org%3a6969%2fannounce&amp;tr=udp%3a%2f%2fopen.stealth.si%3a80%2fannounce&amp;tr=udp%3a%2f%2ftracker.coppersurfer.tk%3a6969%2fannounce&amp;tr=http%3a%2f%2ftracker.internetwarriors.net%3a1337%2fannounce&amp;tr=udp%3a%2f%2ftracker.leechers-paradise.org%3a6969%2fannounce&amp;tr=http%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&amp;tr=http%3a%2f%2ftracker.yoshi210.com%3a6969%2fannounce</p> <p>Hope those links work, I haven&#39;t made torrents in a long time. I can only seed 12 hours a day so stick with it if there are no seeds present. Everything you seed above 1:1 increases the strength and speed of the torrent.</p> <p>Honestly I think there&#39;s a lot of junk in this collection, but that&#39;s art for you. There&#39;s also a ton of awesome stuff, and with the metadata tags you could pull all the Renaissance oil paintings or Japanese woodblock prints or whatever. 
Cycle them as desktop wallpaper or discover something new.</p> <p>Huge thanks to <a href="/u/insanehomelesguy">/u/insanehomelesguy</a> for doing the initial version of this collection.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/steamfrag"> /u/steamfrag </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0wuae/50k_images_from_the_art_institute_of_chicago/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0wuae/50k_images_from_the_art_institute_of_chicago/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0wuae',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0wuae/50k_images_from_the_art_institute_of_chicago/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0wuae/50k_images_from_the_art_institute_of_chicago/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T14:46:05+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=14, tm_min=46, tm_sec=5, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': '50K Images from the Art Institute of Chicago, version 2',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': '50K Images from the Art Institute of Chicago, version 2'}},
  {'authors': [{'name': '/u/AppleNikonBose',
     'href': 'https://www.reddit.com/user/AppleNikonBose'}],
   'author_detail': {'name': '/u/AppleNikonBose',
    'href': 'https://www.reddit.com/user/AppleNikonBose'},
   'href': 'https://www.reddit.com/user/AppleNikonBose',
   'author': '/u/AppleNikonBose',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<table> <tr><td> <a href="https://www.reddit.com/r/DataHoarder/comments/d11xik/another_hoarding_award_but_still_so_annoying_all/"> <img src="https://b.thumbs.redditmedia.com/AwL5Is6euhCj16zV5EsQK8CaOGjPUX9lmJmklA9--Fc.jpg" alt="Another hoarding award. But still so annoying all these message, because then we need to stop searching and continue in a second time (no matter if from scientific research, social, video, photo, etc website; or from google or other site). i got it from every website." title="Another hoarding award. But still so annoying all these message, because then we need to stop searching and continue in a second time (no matter if from scientific research, social, video, photo, etc website; or from google or other site). i got it from every website." /> </a> </td><td> &#32; submitted by &#32; <a href="https://www.reddit.com/user/AppleNikonBose"> /u/AppleNikonBose </a> <br/> <span><a href="https://i.redd.it/y0yqbyiko8l31.jpg">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d11xik/another_hoarding_award_but_still_so_annoying_all/">[comments]</a></span> </td></tr></table>'}],
   'summary': '<table> <tr><td> <a href="https://www.reddit.com/r/DataHoarder/comments/d11xik/another_hoarding_award_but_still_so_annoying_all/"> <img src="https://b.thumbs.redditmedia.com/AwL5Is6euhCj16zV5EsQK8CaOGjPUX9lmJmklA9--Fc.jpg" alt="Another hoarding award. But still so annoying all these message, because then we need to stop searching and continue in a second time (no matter if from scientific research, social, video, photo, etc website; or from google or other site). i got it from every website." title="Another hoarding award. But still so annoying all these message, because then we need to stop searching and continue in a second time (no matter if from scientific research, social, video, photo, etc website; or from google or other site). i got it from every website." /> </a> </td><td> &#32; submitted by &#32; <a href="https://www.reddit.com/user/AppleNikonBose"> /u/AppleNikonBose </a> <br/> <span><a href="https://i.redd.it/y0yqbyiko8l31.jpg">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d11xik/another_hoarding_award_but_still_so_annoying_all/">[comments]</a></span> </td></tr></table>',
   'id': 'https://www.reddit.com/r/t3_d11xik',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d11xik/another_hoarding_award_but_still_so_annoying_all/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d11xik/another_hoarding_award_but_still_so_annoying_all/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T21:27:25+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=21, tm_min=27, tm_sec=25, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Another hoarding award. But still so annoying all these message, because then we need to stop searching and continue in a second time (no matter if from scientific research, social, video, photo, etc website; or from google or other site). i got it from every website.',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Another hoarding award. But still so annoying all these message, because then we need to stop searching and continue in a second time (no matter if from scientific research, social, video, photo, etc website; or from google or other site). i got it from every website.'}},
  {'authors': [{'name': '/u/dangledoodles',
     'href': 'https://www.reddit.com/user/dangledoodles'}],
   'author_detail': {'name': '/u/dangledoodles',
    'href': 'https://www.reddit.com/user/dangledoodles'},
   'href': 'https://www.reddit.com/user/dangledoodles',
   'author': '/u/dangledoodles',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>I have loads of saved things that i want to download and organise but i don&#39;t want to have to do it manually. So i was wondering if there is a tool that will let me download all of my saved items and put them into individual folders that are called the subreddit name. And then i&#39;d upload them too gdrive.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/dangledoodles"> /u/dangledoodles </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0z086/is_there_anything_i_can_use_to_download_all_my/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0z086/is_there_anything_i_can_use_to_download_all_my/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>I have loads of saved things that i want to download and organise but i don&#39;t want to have to do it manually. So i was wondering if there is a tool that will let me download all of my saved items and put them into individual folders that are called the subreddit name. And then i&#39;d upload them too gdrive.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/dangledoodles"> /u/dangledoodles </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0z086/is_there_anything_i_can_use_to_download_all_my/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0z086/is_there_anything_i_can_use_to_download_all_my/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0z086',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0z086/is_there_anything_i_can_use_to_download_all_my/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0z086/is_there_anything_i_can_use_to_download_all_my/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T17:37:12+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=17, tm_min=37, tm_sec=12, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Is there anything i can use to download all my saved items on reddit and organise them by subreddit?',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Is there anything i can use to download all my saved items on reddit and organise them by subreddit?'}},
  {'authors': [{'name': '/u/BlackNight0wl',
     'href': 'https://www.reddit.com/user/BlackNight0wl'}],
   'author_detail': {'name': '/u/BlackNight0wl',
    'href': 'https://www.reddit.com/user/BlackNight0wl'},
   'href': 'https://www.reddit.com/user/BlackNight0wl',
   'author': '/u/BlackNight0wl',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>Not sure if this would work or if you guys could mention alternatives.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/BlackNight0wl"> /u/BlackNight0wl </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0vrhg/youtubedl_for_spotify_playlists/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0vrhg/youtubedl_for_spotify_playlists/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>Not sure if this would work or if you guys could mention alternatives.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/BlackNight0wl"> /u/BlackNight0wl </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0vrhg/youtubedl_for_spotify_playlists/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0vrhg/youtubedl_for_spotify_playlists/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0vrhg',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0vrhg/youtubedl_for_spotify_playlists/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0vrhg/youtubedl_for_spotify_playlists/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T13:08:07+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=13, tm_min=8, tm_sec=7, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Youtube-DL: For Spotify Playlists?',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Youtube-DL: For Spotify Playlists?'}},
  {'authors': [{'name': '/u/Phastor',
     'href': 'https://www.reddit.com/user/Phastor'}],
   'author_detail': {'name': '/u/Phastor',
    'href': 'https://www.reddit.com/user/Phastor'},
   'href': 'https://www.reddit.com/user/Phastor',
   'author': '/u/Phastor',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>I&#39;ve been looking at <a href="https://www.amazon.com/gp/product/B00DGZ42SM/">these</a> for a while and just now noticed that they are claiming to be able to handle SAS drives. While the backplane inside is indeed able to physically accept a SAS drive, the incoming connections to it are SATA. I know you can connect a controller capable of running SAS to this enclosure with breakout cables like <a href="https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC">these</a>, but those are still SATA connections going in. Doesn&#39;t SAS require extra data lines that those incoming SATA connections don&#39;t provide?</p> <p>I know you can connect SATA drives to a SAS controller, but not the other way around.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Phastor"> /u/Phastor </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0z2zs/how_are_these_hot_swap_enclosures_able_to_use_sas/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0z2zs/how_are_these_hot_swap_enclosures_able_to_use_sas/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>I&#39;ve been looking at <a href="https://www.amazon.com/gp/product/B00DGZ42SM/">these</a> for a while and just now noticed that they are claiming to be able to handle SAS drives. While the backplane inside is indeed able to physically accept a SAS drive, the incoming connections to it are SATA. I know you can connect a controller capable of running SAS to this enclosure with breakout cables like <a href="https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC">these</a>, but those are still SATA connections going in. Doesn&#39;t SAS require extra data lines that those incoming SATA connections don&#39;t provide?</p> <p>I know you can connect SATA drives to a SAS controller, but not the other way around.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Phastor"> /u/Phastor </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0z2zs/how_are_these_hot_swap_enclosures_able_to_use_sas/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0z2zs/how_are_these_hot_swap_enclosures_able_to_use_sas/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0z2zs',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0z2zs/how_are_these_hot_swap_enclosures_able_to_use_sas/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0z2zs/how_are_these_hot_swap_enclosures_able_to_use_sas/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T17:43:21+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=17, tm_min=43, tm_sec=21, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'How are these hot swap enclosures able to use SAS?',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'How are these hot swap enclosures able to use SAS?'}},
  {'authors': [{'name': '/u/karlicoss',
     'href': 'https://www.reddit.com/user/karlicoss'}],
   'author_detail': {'name': '/u/karlicoss',
    'href': 'https://www.reddit.com/user/karlicoss'},
   'href': 'https://www.reddit.com/user/karlicoss',
   'author': '/u/karlicoss',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<table> <tr><td> <a href="https://www.reddit.com/r/DataHoarder/comments/d0hjs7/reddit_takeout_export_your_account_data_as_json/"> <img src="https://b.thumbs.redditmedia.com/UeMlOcGppRLksW-FAw6cfHcXsGQkxdo7fGcjjVP0Ucg.jpg" alt="Reddit takeout: export your account data as JSON: comments, submissions, upvotes, etc." title="Reddit takeout: export your account data as JSON: comments, submissions, upvotes, etc." /> </a> </td><td> &#32; submitted by &#32; <a href="https://www.reddit.com/user/karlicoss"> /u/karlicoss </a> <br/> <span><a href="https://github.com/karlicoss/rexport">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0hjs7/reddit_takeout_export_your_account_data_as_json/">[comments]</a></span> </td></tr></table>'}],
   'summary': '<table> <tr><td> <a href="https://www.reddit.com/r/DataHoarder/comments/d0hjs7/reddit_takeout_export_your_account_data_as_json/"> <img src="https://b.thumbs.redditmedia.com/UeMlOcGppRLksW-FAw6cfHcXsGQkxdo7fGcjjVP0Ucg.jpg" alt="Reddit takeout: export your account data as JSON: comments, submissions, upvotes, etc." title="Reddit takeout: export your account data as JSON: comments, submissions, upvotes, etc." /> </a> </td><td> &#32; submitted by &#32; <a href="https://www.reddit.com/user/karlicoss"> /u/karlicoss </a> <br/> <span><a href="https://github.com/karlicoss/rexport">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0hjs7/reddit_takeout_export_your_account_data_as_json/">[comments]</a></span> </td></tr></table>',
   'id': 'https://www.reddit.com/r/t3_d0hjs7',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0hjs7/reddit_takeout_export_your_account_data_as_json/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0hjs7/reddit_takeout_export_your_account_data_as_json/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-06T14:41:13+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=6, tm_hour=14, tm_min=41, tm_sec=13, tm_wday=4, tm_yday=249, tm_isdst=0),
   'title': 'Reddit takeout: export your account data as JSON: comments, submissions, upvotes, etc.',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Reddit takeout: export your account data as JSON: comments, submissions, upvotes, etc.'}},
  {'authors': [{'name': '/u/Cadiz215',
     'href': 'https://www.reddit.com/user/Cadiz215'}],
   'author_detail': {'name': '/u/Cadiz215',
    'href': 'https://www.reddit.com/user/Cadiz215'},
   'href': 'https://www.reddit.com/user/Cadiz215',
   'author': '/u/Cadiz215',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>I had a fairly robust at 40TB Plex server but I needed for space. So my buddy <a href="/u/Gohan472">u/Gohan472</a> and me set out to upgrade my server case and storage capacity to 100TB. </p> <p>We decided to keep the other existing hardware since there’s no problem with any of the parts and transplant them into a new 24 bay Norco chassis and also add a few new parts. Below is a list of parts:</p> <p>CPU: AMD 1950x Threadripper (16 cores, 32 Threads)</p> <p>RAM: 64 GB DDR4 @3600Mhz</p> <p>Motherboard: Gigabyte X399 Aorus 7</p> <p>Hard Drives: 10 x 10TB Iron Wolf = 100TB</p> <p>GPU: NVIDIA Quadro P4000</p> <p>Case: Norco 24 Bay Chassis (RPC-4224)</p> <p>Storage: 2x Samsung 960 EVO 250GB M.2 </p> <p>I currently have about 3,200 movies and over 300 TV shows. The plan is to build a large 4K movie library of Remux files and also to upgrade some of the shows to 1080p Remux files from the standard 2-3GB per file size now. Anyway here are the pictures of our work together, hope you appreciate it fellow hoarders.</p> <p><a href="https://imgur.com/gallery/iS63BhC">https://imgur.com/gallery/iS63BhC</a></p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Cadiz215"> /u/Cadiz215 </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0mclp/new_100tb_plex_build/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0mclp/new_100tb_plex_build/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>I had a fairly robust at 40TB Plex server but I needed for space. So my buddy <a href="/u/Gohan472">u/Gohan472</a> and me set out to upgrade my server case and storage capacity to 100TB. </p> <p>We decided to keep the other existing hardware since there’s no problem with any of the parts and transplant them into a new 24 bay Norco chassis and also add a few new parts. Below is a list of parts:</p> <p>CPU: AMD 1950x Threadripper (16 cores, 32 Threads)</p> <p>RAM: 64 GB DDR4 @3600Mhz</p> <p>Motherboard: Gigabyte X399 Aorus 7</p> <p>Hard Drives: 10 x 10TB Iron Wolf = 100TB</p> <p>GPU: NVIDIA Quadro P4000</p> <p>Case: Norco 24 Bay Chassis (RPC-4224)</p> <p>Storage: 2x Samsung 960 EVO 250GB M.2 </p> <p>I currently have about 3,200 movies and over 300 TV shows. The plan is to build a large 4K movie library of Remux files and also to upgrade some of the shows to 1080p Remux files from the standard 2-3GB per file size now. Anyway here are the pictures of our work together, hope you appreciate it fellow hoarders.</p> <p><a href="https://imgur.com/gallery/iS63BhC">https://imgur.com/gallery/iS63BhC</a></p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Cadiz215"> /u/Cadiz215 </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0mclp/new_100tb_plex_build/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0mclp/new_100tb_plex_build/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0mclp',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0mclp/new_100tb_plex_build/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0mclp/new_100tb_plex_build/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-06T20:46:56+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=6, tm_hour=20, tm_min=46, tm_sec=56, tm_wday=4, tm_yday=249, tm_isdst=0),
   'title': 'New 100TB Plex Build',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'New 100TB Plex Build'}},
  {'authors': [{'name': '/u/InfaSyn',
     'href': 'https://www.reddit.com/user/InfaSyn'}],
   'author_detail': {'name': '/u/InfaSyn',
    'href': 'https://www.reddit.com/user/InfaSyn'},
   'href': 'https://www.reddit.com/user/InfaSyn',
   'author': '/u/InfaSyn',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '&#32; submitted by &#32; <a href="https://www.reddit.com/user/InfaSyn"> /u/InfaSyn </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0x9th/i_regularly_use_a_friends_plex_server_that_will/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0x9th/i_regularly_use_a_friends_plex_server_that_will/">[comments]</a></span>'}],
   'summary': '&#32; submitted by &#32; <a href="https://www.reddit.com/user/InfaSyn"> /u/InfaSyn </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0x9th/i_regularly_use_a_friends_plex_server_that_will/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0x9th/i_regularly_use_a_friends_plex_server_that_will/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0x9th',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0x9th/i_regularly_use_a_friends_plex_server_that_will/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0x9th/i_regularly_use_a_friends_plex_server_that_will/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T15:20:58+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=15, tm_min=20, tm_sec=58, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'I regularly use a friends PLEX server that will be going down soon. Is there anyway for me to download the content for good to add to my own? (Basically, PLEX->MP4)',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'I regularly use a friends PLEX server that will be going down soon. Is there anyway for me to download the content for good to add to my own? (Basically, PLEX->MP4)'}},
  {'authors': [{'name': '/u/GonziHere',
     'href': 'https://www.reddit.com/user/GonziHere'}],
   'author_detail': {'name': '/u/GonziHere',
    'href': 'https://www.reddit.com/user/GonziHere'},
   'href': 'https://www.reddit.com/user/GonziHere',
   'author': '/u/GonziHere',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>Is RAID 6 really &quot;that bad&quot;? I am <strong>about to buy</strong> and install that kind of solution (with further cold backups of photos on blu-rays and extremely important data also living on cloud) and while I get that &quot;rebuilding raid array&quot; takes some time and can fail in the mean time (and because how hard it actually is to do so), I still cannot see the scenario where this fails (without &quot;fire&quot; kind of events), <strong>while other, simmilary priced solutions wouldn&#39;t</strong>.</p> <p>I mean, it&#39;s easy to find stuff like &quot;RAID 6 is dead&quot; or &quot;don&#39;t ever use RAID 6&quot;<strong>,</strong> but </p> <p><strong>what is the alternative? What should I do instead?</strong></p> <p><strong>Is there cheaper setup with the same (or even better) reliability?</strong></p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/GonziHere"> /u/GonziHere </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d11h7m/raid_6_on_qnap_nas_4x4tb_hdd/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d11h7m/raid_6_on_qnap_nas_4x4tb_hdd/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>Is RAID 6 really &quot;that bad&quot;? I am <strong>about to buy</strong> and install that kind of solution (with further cold backups of photos on blu-rays and extremely important data also living on cloud) and while I get that &quot;rebuilding raid array&quot; takes some time and can fail in the mean time (and because how hard it actually is to do so), I still cannot see the scenario where this fails (without &quot;fire&quot; kind of events), <strong>while other, simmilary priced solutions wouldn&#39;t</strong>.</p> <p>I mean, it&#39;s easy to find stuff like &quot;RAID 6 is dead&quot; or &quot;don&#39;t ever use RAID 6&quot;<strong>,</strong> but </p> <p><strong>what is the alternative? What should I do instead?</strong></p> <p><strong>Is there cheaper setup with the same (or even better) reliability?</strong></p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/GonziHere"> /u/GonziHere </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d11h7m/raid_6_on_qnap_nas_4x4tb_hdd/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d11h7m/raid_6_on_qnap_nas_4x4tb_hdd/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d11h7m',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d11h7m/raid_6_on_qnap_nas_4x4tb_hdd/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d11h7m/raid_6_on_qnap_nas_4x4tb_hdd/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T20:51:34+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=20, tm_min=51, tm_sec=34, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'RAID 6 on QNAP NAS 4x4TB HDD',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'RAID 6 on QNAP NAS 4x4TB HDD'}},
  {'authors': [{'name': '/u/GonziHere',
     'href': 'https://www.reddit.com/user/GonziHere'}],
   'author_detail': {'name': '/u/GonziHere',
    'href': 'https://www.reddit.com/user/GonziHere'},
   'href': 'https://www.reddit.com/user/GonziHere',
   'author': '/u/GonziHere',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>Hi, firstly, this community is great, as I am about to solve my photo issues and in general, this community had the best info that I could find on the internet, so thank you for that.</p> <p>Secondly, We have &quot;family photo collection&quot; that spans few hundred gigs and while we have multitude of copies, we have them all around the place. I want to centralize it on NAS (with some redundancy) and then back them up via other means. My biggest issue right now is: &quot;how do I actually merge it&quot; - we currently have that collection spread on 3 notebooks, 1 PC and 3 external HDDS - and the problem is, that we actually deleted some unwanted photos in some folders and not in others. I would like to be able to see these differences, so that I can decide on file-by-file / folder-by-folder basis. Does anyone have any tips?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/GonziHere"> /u/GonziHere </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d11cde/merging_photo_folders/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d11cde/merging_photo_folders/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>Hi, firstly, this community is great, as I am about to solve my photo issues and in general, this community had the best info that I could find on the internet, so thank you for that.</p> <p>Secondly, We have &quot;family photo collection&quot; that spans few hundred gigs and while we have multitude of copies, we have them all around the place. I want to centralize it on NAS (with some redundancy) and then back them up via other means. My biggest issue right now is: &quot;how do I actually merge it&quot; - we currently have that collection spread on 3 notebooks, 1 PC and 3 external HDDS - and the problem is, that we actually deleted some unwanted photos in some folders and not in others. I would like to be able to see these differences, so that I can decide on file-by-file / folder-by-folder basis. Does anyone have any tips?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/GonziHere"> /u/GonziHere </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d11cde/merging_photo_folders/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d11cde/merging_photo_folders/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d11cde',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d11cde/merging_photo_folders/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d11cde/merging_photo_folders/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T20:40:55+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=20, tm_min=40, tm_sec=55, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'merging photo folders',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'merging photo folders'}},
  {'authors': [{'name': '/u/Kiwi_birds',
     'href': 'https://www.reddit.com/user/Kiwi_birds'}],
   'author_detail': {'name': '/u/Kiwi_birds',
    'href': 'https://www.reddit.com/user/Kiwi_birds'},
   'href': 'https://www.reddit.com/user/Kiwi_birds',
   'author': '/u/Kiwi_birds',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>I have an idea on where I want to start. I think I may want to keep a backlog of ever arch Linux iso since I think that&#39;d be nice to have and I don&#39;t know too many people who are backing those up. Is that dumb since arch is rolling and I shouldn&#39;t hold my breath or should I do it for the hell of it?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Kiwi_birds"> /u/Kiwi_birds </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0wngk/where_should_i_start_with_hoarding_data/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0wngk/where_should_i_start_with_hoarding_data/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>I have an idea on where I want to start. I think I may want to keep a backlog of ever arch Linux iso since I think that&#39;d be nice to have and I don&#39;t know too many people who are backing those up. Is that dumb since arch is rolling and I shouldn&#39;t hold my breath or should I do it for the hell of it?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Kiwi_birds"> /u/Kiwi_birds </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0wngk/where_should_i_start_with_hoarding_data/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0wngk/where_should_i_start_with_hoarding_data/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0wngk',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0wngk/where_should_i_start_with_hoarding_data/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0wngk/where_should_i_start_with_hoarding_data/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T14:29:37+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=14, tm_min=29, tm_sec=37, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Where should I start with hoarding data?',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Where should I start with hoarding data?'}},
  {'authors': [{'name': '/u/cns000',
     'href': 'https://www.reddit.com/user/cns000'}],
   'author_detail': {'name': '/u/cns000',
    'href': 'https://www.reddit.com/user/cns000'},
   'href': 'https://www.reddit.com/user/cns000',
   'author': '/u/cns000',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>4 years ago i bought a 2tb external hard disk and synology ds215j nas drive and i put in it two 2tb nas disks and i set them to mirroring. i am making backups of my files on both nas drive and external drive. i download movies, games and pictures</p> <p>a couple of months ago i started to run out of hard disk space on both so i upgraded and i bought a 4tb external hard disk and i removed the two nas disks and i put two 4tb nas disks and i transferred all my files from old disks to new disks. at the rate at which i download files i will run out of space again in a few years and i dont know what to do next</p> <p>i can take out the two nas disks and put two 6tb disks but what about the external drive? a 6tb external drive will be big a tower and i dont like that thus i am thinking of only using a nas drive but if i do that then i need something more complex than the ds215j</p> <p>i need something which has more disk bays and it should have high security and it should block viruses if a computer which is connected to the same internet network got infected. what do i get and what is the recommended raid level?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/cns000"> /u/cns000 </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0zt1j/how_should_i_back_up_my_files_in_the_future/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0zt1j/how_should_i_back_up_my_files_in_the_future/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>4 years ago i bought a 2tb external hard disk and synology ds215j nas drive and i put in it two 2tb nas disks and i set them to mirroring. i am making backups of my files on both nas drive and external drive. i download movies, games and pictures</p> <p>a couple of months ago i started to run out of hard disk space on both so i upgraded and i bought a 4tb external hard disk and i removed the two nas disks and i put two 4tb nas disks and i transferred all my files from old disks to new disks. at the rate at which i download files i will run out of space again in a few years and i dont know what to do next</p> <p>i can take out the two nas disks and put two 6tb disks but what about the external drive? a 6tb external drive will be big a tower and i dont like that thus i am thinking of only using a nas drive but if i do that then i need something more complex than the ds215j</p> <p>i need something which has more disk bays and it should have high security and it should block viruses if a computer which is connected to the same internet network got infected. what do i get and what is the recommended raid level?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/cns000"> /u/cns000 </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0zt1j/how_should_i_back_up_my_files_in_the_future/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0zt1j/how_should_i_back_up_my_files_in_the_future/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0zt1j',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0zt1j/how_should_i_back_up_my_files_in_the_future/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0zt1j/how_should_i_back_up_my_files_in_the_future/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T18:39:57+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=18, tm_min=39, tm_sec=57, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'how should i back up my files in the future',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'how should i back up my files in the future'}},
  {'authors': [{'name': '/u/xtphty',
     'href': 'https://www.reddit.com/user/xtphty'}],
   'author_detail': {'name': '/u/xtphty',
    'href': 'https://www.reddit.com/user/xtphty'},
   'href': 'https://www.reddit.com/user/xtphty',
   'author': '/u/xtphty',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>I&#39;m looking to build some fast and compact storage, preferably with hardware encryption and raid-z1 capabilities (yes I know thats paranoid and won&#39;t give anything for speed). Looking at m.2 over 2.5&quot; enclosures first because would be sooo much smaller, but closest thing I have found is this: <a href="https://www.qnap.com/en-us/product/tbs-453a">https://www.qnap.com/en-us/product/tbs-453a</a>. I can&#39;t find that cheaper than $999 which is a bit insane, would prefer to drop that much on the SSDs themselves. Are there any other m.2 enclosures available? Or can I build my own somehow while keeping it compact?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/xtphty"> /u/xtphty </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0zqcp/any_good_m2_sdd_4bay_enclosures/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0zqcp/any_good_m2_sdd_4bay_enclosures/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>I&#39;m looking to build some fast and compact storage, preferably with hardware encryption and raid-z1 capabilities (yes I know thats paranoid and won&#39;t give anything for speed). Looking at m.2 over 2.5&quot; enclosures first because would be sooo much smaller, but closest thing I have found is this: <a href="https://www.qnap.com/en-us/product/tbs-453a">https://www.qnap.com/en-us/product/tbs-453a</a>. I can&#39;t find that cheaper than $999 which is a bit insane, would prefer to drop that much on the SSDs themselves. Are there any other m.2 enclosures available? Or can I build my own somehow while keeping it compact?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/xtphty"> /u/xtphty </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0zqcp/any_good_m2_sdd_4bay_enclosures/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0zqcp/any_good_m2_sdd_4bay_enclosures/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0zqcp',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0zqcp/any_good_m2_sdd_4bay_enclosures/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0zqcp/any_good_m2_sdd_4bay_enclosures/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T18:34:11+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=18, tm_min=34, tm_sec=11, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Any good m.2 sdd 4-bay enclosures?',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Any good m.2 sdd 4-bay enclosures?'}},
  {'authors': [{'name': '/u/syco54645',
     'href': 'https://www.reddit.com/user/syco54645'}],
   'author_detail': {'name': '/u/syco54645',
    'href': 'https://www.reddit.com/user/syco54645'},
   'href': 'https://www.reddit.com/user/syco54645',
   'author': '/u/syco54645',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>Does anyone know how to rip content from pluto.tv? Trying to find a way to archive some of the shows on there.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/syco54645"> /u/syco54645 </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0z4uf/ripping_content_from_plutotv/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0z4uf/ripping_content_from_plutotv/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>Does anyone know how to rip content from pluto.tv? Trying to find a way to archive some of the shows on there.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/syco54645"> /u/syco54645 </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0z4uf/ripping_content_from_plutotv/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0z4uf/ripping_content_from_plutotv/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0z4uf',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0z4uf/ripping_content_from_plutotv/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0z4uf/ripping_content_from_plutotv/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T17:47:46+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=17, tm_min=47, tm_sec=46, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Ripping Content From Pluto.tv',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Ripping Content From Pluto.tv'}},
  {'authors': [{'name': '/u/3ds_2ds',
     'href': 'https://www.reddit.com/user/3ds_2ds'}],
   'author_detail': {'name': '/u/3ds_2ds',
    'href': 'https://www.reddit.com/user/3ds_2ds'},
   'href': 'https://www.reddit.com/user/3ds_2ds',
   'author': '/u/3ds_2ds',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>I am running Ubuntu server 18.04 and using an ICY BOX IB-RD2253-U31 enclosure with two WD Black disks (no NAS disks) in RAID 1 mode (mirror).</p> <p>I want to put the two disks in sleep mode with hdparm (lowest power consumption sleep mode) with <code>sudo hdparm -Y /dev/sdb</code><br/> but the disks seem to run like nothing happened. I also tried <code>sudo udisksctl power-off -b /dev/sdb</code><br/> : This causes the OS to eject the drives, but the disks are still running.</p> <p>Is this the fault of the enclosure? And is there any way to spin down the disks completely until needed again?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/3ds_2ds"> /u/3ds_2ds </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0w5kh/spin_down_of_icy_box_raid_enclosure_not_working/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0w5kh/spin_down_of_icy_box_raid_enclosure_not_working/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>I am running Ubuntu server 18.04 and using an ICY BOX IB-RD2253-U31 enclosure with two WD Black disks (no NAS disks) in RAID 1 mode (mirror).</p> <p>I want to put the two disks in sleep mode with hdparm (lowest power consumption sleep mode) with <code>sudo hdparm -Y /dev/sdb</code><br/> but the disks seem to run like nothing happened. I also tried <code>sudo udisksctl power-off -b /dev/sdb</code><br/> : This causes the OS to eject the drives, but the disks are still running.</p> <p>Is this the fault of the enclosure? And is there any way to spin down the disks completely until needed again?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/3ds_2ds"> /u/3ds_2ds </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0w5kh/spin_down_of_icy_box_raid_enclosure_not_working/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0w5kh/spin_down_of_icy_box_raid_enclosure_not_working/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0w5kh',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0w5kh/spin_down_of_icy_box_raid_enclosure_not_working/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0w5kh/spin_down_of_icy_box_raid_enclosure_not_working/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T13:45:57+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=13, tm_min=45, tm_sec=57, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Spin down of Icy Box Raid Enclosure not working',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Spin down of Icy Box Raid Enclosure not working'}},
  {'authors': [{'name': '/u/botterway',
     'href': 'https://www.reddit.com/user/botterway'}],
   'author_detail': {'name': '/u/botterway',
    'href': 'https://www.reddit.com/user/botterway'},
   'href': 'https://www.reddit.com/user/botterway',
   'author': '/u/botterway',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>Continuing in my quest for large-scale photo management*, I came across Piwigo. Seems like it might do what I need. Before I go to the trouble of setting it up, has anyone played with it? Primary question I have - can it be pointed at a tree of photos and index/import them? Or does it pull the photos into its own storage - effectively duplicating all of the photos? Depending on what I&#39;ve read, I&#39;ve seen responses that imply either is the case, but looking for the actual answer. Don&#39;t want to go to the trouble of setting it up and configuring it to find that the only way it works is to make a second copy of my 2.5TB of photographs. ;)</p> <p>Thanks in advance!</p> <p>*<a href="https://www.reddit.com/r/DataHoarder/comments/cx54kl/managing_a_large_collection_450000_25tb_of/">https://www.reddit.com/r/DataHoarder/comments/cx54kl/managing_a_large_collection_450000_25tb_of/</a></p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/botterway"> /u/botterway </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0u37s/piwigo_for_largescale_photo_management/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0u37s/piwigo_for_largescale_photo_management/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>Continuing in my quest for large-scale photo management*, I came across Piwigo. Seems like it might do what I need. Before I go to the trouble of setting it up, has anyone played with it? Primary question I have - can it be pointed at a tree of photos and index/import them? Or does it pull the photos into its own storage - effectively duplicating all of the photos? Depending on what I&#39;ve read, I&#39;ve seen responses that imply either is the case, but looking for the actual answer. Don&#39;t want to go to the trouble of setting it up and configuring it to find that the only way it works is to make a second copy of my 2.5TB of photographs. ;)</p> <p>Thanks in advance!</p> <p>*<a href="https://www.reddit.com/r/DataHoarder/comments/cx54kl/managing_a_large_collection_450000_25tb_of/">https://www.reddit.com/r/DataHoarder/comments/cx54kl/managing_a_large_collection_450000_25tb_of/</a></p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/botterway"> /u/botterway </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0u37s/piwigo_for_largescale_photo_management/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0u37s/piwigo_for_largescale_photo_management/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0u37s',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0u37s/piwigo_for_largescale_photo_management/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0u37s/piwigo_for_largescale_photo_management/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T09:47:46+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=9, tm_min=47, tm_sec=46, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Piwigo for large-scale photo management?',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Piwigo for large-scale photo management?'}},
  {'authors': [{'name': '/u/Liquid_Magic',
     'href': 'https://www.reddit.com/user/Liquid_Magic'}],
   'author_detail': {'name': '/u/Liquid_Magic',
    'href': 'https://www.reddit.com/user/Liquid_Magic'},
   'href': 'https://www.reddit.com/user/Liquid_Magic',
   'author': '/u/Liquid_Magic',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '&#32; submitted by &#32; <a href="https://www.reddit.com/user/Liquid_Magic"> /u/Liquid_Magic </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0qz1h/whats_the_best_way_to_create_and_use_an_offline/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0qz1h/whats_the_best_way_to_create_and_use_an_offline/">[comments]</a></span>'}],
   'summary': '&#32; submitted by &#32; <a href="https://www.reddit.com/user/Liquid_Magic"> /u/Liquid_Magic </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0qz1h/whats_the_best_way_to_create_and_use_an_offline/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0qz1h/whats_the_best_way_to_create_and_use_an_offline/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0qz1h',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0qz1h/whats_the_best_way_to_create_and_use_an_offline/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0qz1h/whats_the_best_way_to_create_and_use_an_offline/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T03:22:51+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=3, tm_min=22, tm_sec=51, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'What’s the best way to create and use an offline snapshot version of Wikipedia?',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'What’s the best way to create and use an offline snapshot version of Wikipedia?'}},
  {'authors': [{'name': '/u/Crapicus_Maximus',
     'href': 'https://www.reddit.com/user/Crapicus_Maximus'}],
   'author_detail': {'name': '/u/Crapicus_Maximus',
    'href': 'https://www.reddit.com/user/Crapicus_Maximus'},
   'href': 'https://www.reddit.com/user/Crapicus_Maximus',
   'author': '/u/Crapicus_Maximus',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>I just finished migrating my download server over to a Raspberry Pi 3. I have my Gdrive setup and everything is working as far as the downloading and uploading of my content, my issues is when I wake up in the morning sometimes the apps mentioned in the title have dropped out. So at this point I have to either reboot the pi to have them auto start again or ssh in and restart them all individually, can someone here help me setup a script or other method that will monitor these services and when they drop out start again automatically,</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Crapicus_Maximus"> /u/Crapicus_Maximus </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0ydij/script_to_auto_restart_services_sabnzbd_sonarr/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0ydij/script_to_auto_restart_services_sabnzbd_sonarr/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>I just finished migrating my download server over to a Raspberry Pi 3. I have my Gdrive setup and everything is working as far as the downloading and uploading of my content, my issues is when I wake up in the morning sometimes the apps mentioned in the title have dropped out. So at this point I have to either reboot the pi to have them auto start again or ssh in and restart them all individually, can someone here help me setup a script or other method that will monitor these services and when they drop out start again automatically,</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Crapicus_Maximus"> /u/Crapicus_Maximus </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0ydij/script_to_auto_restart_services_sabnzbd_sonarr/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0ydij/script_to_auto_restart_services_sabnzbd_sonarr/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0ydij',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0ydij/script_to_auto_restart_services_sabnzbd_sonarr/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0ydij/script_to_auto_restart_services_sabnzbd_sonarr/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T16:48:08+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=16, tm_min=48, tm_sec=8, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Script to auto restart services (Sabnzbd, Sonarr, Radarr) when they drop out',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Script to auto restart services (Sabnzbd, Sonarr, Radarr) when they drop out'}},
  {'authors': [{'name': '/u/SirReal14',
     'href': 'https://www.reddit.com/user/SirReal14'}],
   'author_detail': {'name': '/u/SirReal14',
    'href': 'https://www.reddit.com/user/SirReal14'},
   'href': 'https://www.reddit.com/user/SirReal14',
   'author': '/u/SirReal14',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '&#32; submitted by &#32; <a href="https://www.reddit.com/user/SirReal14"> /u/SirReal14 </a> <br/> <span><a href="https://old.reddit.com/r/wallstreetbets/comments/d0pc58/youtube_has_just_deleted_shkrelis_account/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0r2w9/youtube_has_just_deleted_shkrelis_account/">[comments]</a></span>'}],
   'summary': '&#32; submitted by &#32; <a href="https://www.reddit.com/user/SirReal14"> /u/SirReal14 </a> <br/> <span><a href="https://old.reddit.com/r/wallstreetbets/comments/d0pc58/youtube_has_just_deleted_shkrelis_account/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0r2w9/youtube_has_just_deleted_shkrelis_account/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0r2w9',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0r2w9/youtube_has_just_deleted_shkrelis_account/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0r2w9/youtube_has_just_deleted_shkrelis_account/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T03:34:09+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=3, tm_min=34, tm_sec=9, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': "Youtube has just deleted Shkreli's account",
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': "Youtube has just deleted Shkreli's account"}},
  {'authors': [{'name': '/u/Arag0ld',
     'href': 'https://www.reddit.com/user/Arag0ld'}],
   'author_detail': {'name': '/u/Arag0ld',
    'href': 'https://www.reddit.com/user/Arag0ld'},
   'href': 'https://www.reddit.com/user/Arag0ld',
   'author': '/u/Arag0ld',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>I know it&#39;s possible to link drives together in RAID, but if drives linked in RAID 0 are basically one drive, could I RAID 0 together say, 3 RAID 0 arrays?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Arag0ld"> /u/Arag0ld </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0xb38/raid_on_raid/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0xb38/raid_on_raid/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>I know it&#39;s possible to link drives together in RAID, but if drives linked in RAID 0 are basically one drive, could I RAID 0 together say, 3 RAID 0 arrays?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Arag0ld"> /u/Arag0ld </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0xb38/raid_on_raid/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0xb38/raid_on_raid/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0xb38',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0xb38/raid_on_raid/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0xb38/raid_on_raid/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T15:24:01+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=15, tm_min=24, tm_sec=1, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'RAID on RAID',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'RAID on RAID'}},
  {'authors': [{'name': '/u/binkarus',
     'href': 'https://www.reddit.com/user/binkarus'}],
   'author_detail': {'name': '/u/binkarus',
    'href': 'https://www.reddit.com/user/binkarus'},
   'href': 'https://www.reddit.com/user/binkarus',
   'author': '/u/binkarus',
   'tags': [{'term': 'DataHoarder', 'scheme': None, 'label': 'r/DataHoarder'}],
   'content': [{'type': 'text/html',
     'language': None,
     'base': 'https://www.reddit.com/r/DataHoarder.rss',
     'value': '<!-- SC_OFF --><div class="md"><p>A while ago, someone asked about recording the Hong Kong protest livestreams as they came online in case they were deleted or the VODs weren&#39;t saved. I commented and suggested using youtube-dl + polling.</p> <p>At this point, I didn&#39;t realize that Twitch streams weren&#39;t saved automatically and that people could delete them (including one of my favorite streamers, demolition_d, who occasionally deletes his streams while he&#39;s drunk).</p> <p>As such I actually found myself needing to use the tool myself. I&#39;ve been testing it for a week or so, and here is the result:</p> <p><strong><code>poll-live.sh</code></strong></p> <pre><code>while randsleep 1 3 clear youtube-dl --write-info-json --hls-use-mpegts --no-part https://www.twitch.tv/$(basename $PWD) end </code></pre> <p><strong>Notes:</strong></p> <ul> <li>Since I was organizing things by twitch username already, I set it to work off of <code>$(basename $PWD)</code>, which just means the current directory name. So just enter a directory like <code>demolition_d</code> and call <code>../poll-live.sh</code> or wherever you put it.</li> <li>The <code>--hls-use-mpegts</code> is relatively important, as it stores the data in a format that can be played while the stream is being downloaded, which is to use the <code>mpegts</code> format. This is as opposed to requiring a container format like <code>mp4</code> which requires the <code>moov</code> atom which is set at the end of the stream. You can encode this later into a nicer format once the ingestion is done, or not.</li> <li><code>--no-part</code> means don&#39;t add <code>.part</code> to the filename (e.g. <code>.mp4.part</code>) while downloading. This works in concert with <code>--hls-use-mpegts</code>, since with <code>mpegts</code>, it already is a fully functioning video file, meaning it isn&#39;t a partial.</li> <li><code>--write-info-json</code> is just something I always append so that I can get more metadata. If you&#39;ve never used this before, then look into the <code>man</code> pages for more information.</li> <li><code>randsleep 1 3</code> is a script I have written which sleeps a random amount of time between 1000ms and 3000ms. This is a pre-emptive measure to avoid being flagged by any DDOS protection the twitch.tv server firewalls might have by having a more random access pattern. I&#39;m not sure if it&#39;s necessary, but I definitely know of firewall protections for regular request patterns, so it doesn&#39;t hurt.</li> <li>The sleep granularity means that at most you may miss the first 3 seconds of the stream, but that&#39;s a range I&#39;m comfortable with.</li> <li><code>clear</code> just clears the terminal that I run this from in <code>tmux</code>.</li> </ul> <p>Obviously, this particular script is unix specific, but the principles work for any system.</p> <p>I can say that this system works splendidly so far. This in concert with a script I wrote to download a channel&#39;s VODs are my entire Twitch video archiving efforts so far.</p> <p>As far as the chat logs go, I do have twitch connected via IRC, so I just periodically save the IRC logs. 
This is something I plan to improve later.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/binkarus"> /u/binkarus </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0ozod/tested_capturing_twitch_livestreams_as_they_come/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0ozod/tested_capturing_twitch_livestreams_as_they_come/">[comments]</a></span>'}],
   'summary': '<!-- SC_OFF --><div class="md"><p>A while ago, someone asked about recording the Hong Kong protest livestreams as they came online in case they were deleted or the VODs weren&#39;t saved. I commented and suggested using youtube-dl + polling.</p> <p>At this point, I didn&#39;t realize that Twitch streams weren&#39;t saved automatically and that people could delete them (including one of my favorite streamers, demolition_d, who occasionally deletes his streams while he&#39;s drunk).</p> <p>As such I actually found myself needing to use the tool myself. I&#39;ve been testing it for a week or so, and here is the result:</p> <p><strong><code>poll-live.sh</code></strong></p> <pre><code>while randsleep 1 3 clear youtube-dl --write-info-json --hls-use-mpegts --no-part https://www.twitch.tv/$(basename $PWD) end </code></pre> <p><strong>Notes:</strong></p> <ul> <li>Since I was organizing things by twitch username already, I set it to work off of <code>$(basename $PWD)</code>, which just means the current directory name. So just enter a directory like <code>demolition_d</code> and call <code>../poll-live.sh</code> or wherever you put it.</li> <li>The <code>--hls-use-mpegts</code> is relatively important, as it stores the data in a format that can be played while the stream is being downloaded, which is to use the <code>mpegts</code> format. This is as opposed to requiring a container format like <code>mp4</code> which requires the <code>moov</code> atom which is set at the end of the stream. You can encode this later into a nicer format once the ingestion is done, or not.</li> <li><code>--no-part</code> means don&#39;t add <code>.part</code> to the filename (e.g. <code>.mp4.part</code>) while downloading. This works in concert with <code>--hls-use-mpegts</code>, since with <code>mpegts</code>, it already is a fully functioning video file, meaning it isn&#39;t a partial.</li> <li><code>--write-info-json</code> is just something I always append so that I can get more metadata. If you&#39;ve never used this before, then look into the <code>man</code> pages for more information.</li> <li><code>randsleep 1 3</code> is a script I have written which sleeps a random amount of time between 1000ms and 3000ms. This is a pre-emptive measure to avoid being flagged by any DDOS protection the twitch.tv server firewalls might have by having a more random access pattern. I&#39;m not sure if it&#39;s necessary, but I definitely know of firewall protections for regular request patterns, so it doesn&#39;t hurt.</li> <li>The sleep granularity means that at most you may miss the first 3 seconds of the stream, but that&#39;s a range I&#39;m comfortable with.</li> <li><code>clear</code> just clears the terminal that I run this from in <code>tmux</code>.</li> </ul> <p>Obviously, this particular script is unix specific, but the principles work for any system.</p> <p>I can say that this system works splendidly so far. This in concert with a script I wrote to download a channel&#39;s VODs are my entire Twitch video archiving efforts so far.</p> <p>As far as the chat logs go, I do have twitch connected via IRC, so I just periodically save the IRC logs. 
This is something I plan to improve later.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/binkarus"> /u/binkarus </a> <br/> <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0ozod/tested_capturing_twitch_livestreams_as_they_come/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/DataHoarder/comments/d0ozod/tested_capturing_twitch_livestreams_as_they_come/">[comments]</a></span>',
   'id': 'https://www.reddit.com/r/t3_d0ozod',
   'guidislink': True,
   'link': 'https://www.reddit.com/r/DataHoarder/comments/d0ozod/tested_capturing_twitch_livestreams_as_they_come/',
   'links': [{'href': 'https://www.reddit.com/r/DataHoarder/comments/d0ozod/tested_capturing_twitch_livestreams_as_they_come/',
     'rel': 'alternate',
     'type': 'text/html'}],
   'updated': '2019-09-07T00:16:55+00:00',
   'updated_parsed': time.struct_time(tm_year=2019, tm_mon=9, tm_mday=7, tm_hour=0, tm_min=16, tm_sec=55, tm_wday=5, tm_yday=250, tm_isdst=0),
   'title': 'Tested: Capturing Twitch livestreams as they come online by polling with youtube-dl',
   'title_detail': {'type': 'text/plain',
    'language': None,
    'base': 'https://www.reddit.com/r/DataHoarder.rss',
    'value': 'Tested: Capturing Twitch livestreams as they come online by polling with youtube-dl'}}],
 'bozo': 0,
 'headers': {'Content-Type': 'application/atom+xml; charset=UTF-8',
  'x-ua-compatible': 'IE=edge',
  'x-frame-options': 'SAMEORIGIN',
  'x-content-type-options': 'nosniff',
  'x-xss-protection': '1; mode=block',
  'set-cookie': 'session_tracker=s3yvbGcxrzYGyFzTjj.0.1567902713309.Z0FBQUFBQmRkRXY1U2ZXRjBQNE1FZVM4QWZrd3J5dld0VnJndl95Y0k1ZFZic28xQnBKYTY1NGU3YXUxYXl4WkJjVC05NHQ5OGo5ZTBUOVFWdUlTZWM0Z3pPR2k1WmRJQkpDSUtGSzY3M1NPUEFjcWVtczNlOVNsaUNVcHNNbzI1OVRDa3ZwTVZxN0E; Domain=reddit.com; Max-Age=7199; Path=/; expires=Sun, 08-Sep-2019 02:31:53 GMT; secure',
  'cache-control': 'max-age=0, must-revalidate',
  'X-Moose': 'majestic',
  'Content-Length': '48770',
  'Accept-Ranges': 'bytes',
  'Date': 'Sun, 08 Sep 2019 00:31:53 GMT',
  'Via': '1.1 varnish',
  'Connection': 'close',
  'X-Served-By': 'cache-iah17233-IAH',
  'X-Cache': 'MISS',
  'X-Cache-Hits': '0',
  'X-Timer': 'S1567902713.271673,VS0,VE335',
  'Set-Cookie': 'session_tracker=s3yvbGcxrzYGyFzTjj.0.1567902713309.Z0FBQUFBQmRkRXY1U2ZXRjBQNE1FZVM4QWZrd3J5dld0VnJndl95Y0k1ZFZic28xQnBKYTY1NGU3YXUxYXl4WkJjVC05NHQ5OGo5ZTBUOVFWdUlTZWM0Z3pPR2k1WmRJQkpDSUtGSzY3M1NPUEFjcWVtczNlOVNsaUNVcHNNbzI1OVRDa3ZwTVZxN0E; Domain=reddit.com; Max-Age=7199; Path=/; expires=Sun, 08-Sep-2019 02:31:53 GMT; secure',
  'Strict-Transport-Security': 'max-age=15552000; includeSubDomains; preload',
  'Server': 'snooserv'},
 'href': 'https://www.reddit.com/r/DataHoarder.rss',
 'status': 200,
 'encoding': 'UTF-8',
 'version': 'atom10',
 'namespaces': {'': 'http://www.w3.org/2005/Atom'}}
In [4]:
feed_title = feed['feed']['title']
In [5]:
feed_title
Out[5]:
"It's A Digital Disease!"

Great place to start.

WELCOME TO THE POST: DAY 009

What the heck is happening up there? Does anyone actually read .rss? I admittedly don't. I see the buttons at the bottom of articles sometimes, but honestly most of the content sharing I participate in is stuff I see on Twitter, plus whatever social interaction goes into deciding what shows up in my feed on Youtube.

Anyway..

I kind of figured out one goal that I want to work towards and I know what that's going to be.

I want to do some friggin parsing.

What's parsing?

(I'm secretly inserting techy vocab words into your consciousness)

It's taking that mumbo jumbo up there and making sense of it. I wouldn't go so far as to call it translation, but that blob is a data signal, and parsing is the technique/tool you build to understand it.

But to answer the ORIGINAL question: parsing is what you do to understand it. It's THE VERB, ya know?
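
To make that concrete, here's a tiny sketch of what I mean (this just walks the parsed feed that's already sitting in the feed variable up there, nothing fancy):

for entry in feed['entries']:   # each entry is one r/DataHoarder post
    print(entry['title'])       # e.g. 'RAID 6 on QNAP NAS 4x4TB HDD'
    print(entry['link'])        # the reddit permalink for that post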

warning. this post will probably get long.

I'm going to construct my owwwwn parser. And try to learn how to do regex at the same time.

ARE YOU NOT ENTERTAINED?

omg what is regex. another effing vocab word. it's like the holy grail of all parsers but you seriously can't build a parser without it, right? Like. omg.

Basically, regex (short for regular expressions) is a little pattern language for describing text, so you can find or pull out the bits you care about from whatever kind of text ever.
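
Just as a taste (a sketch, not the parser I'm actually going to build), here's regex yanking every link out of one of those summary blobs in the feed:

import re

summary = feed['entries'][0]['summary']         # the raw HTML string for the first post
links = re.findall(r'href="([^"]+)"', summary)  # grab whatever sits inside href="..."
print(links)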


Also here are a bunch of other parsers. Looks like a common project:
(I'm going to start sharing links that you can copy and paste so that you don't get into the habit of just CLICKING ON LINKS. Don't just click on stuff. That's yucky. Also know that I'm not judging you if you do do that, but also I literally just said "do do" so.. you decide)
https://pypi.org/search/?c=Topic+%3A%3A+Text+Processing



No more tell. Here it is:

In [6]:
import re # yes Rihanna. See - all rap/hip hop IS programming.
from bs4 import BeautifulSoup # yeah that's hilarious. import bs.
In [7]:
# mid post thought: I kind of want to write short halloween scary stories. 
# I might hide them..
In [8]:
end_style_tag = re.compile(r'</style>') # pattern for the closing </style> tag in the exported notebook html
In [9]:
with open("Untitled1.html") as html_file:
    soup = BeautifulSoup(html_file)

I might literally write this program line by line, so keep that in mind as you're reading this. I'll try to step through my thinking, but right now what I want to do is find where the actual html content begins for a jupyter notebook file (the environment that I use to program, and you should too if you program in python) that has been converted into an html file.

it's a body of text. that's it.
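
Here's a rough sketch of where I'm headed with that compiled pattern (assuming the exported notebook html keeps its css in <style> blocks near the top, which is what the nbconvert output looks like to me - treat this as a guess, not gospel):

import re

end_style_tag = re.compile(r'</style>')      # same pattern as the cell above

with open("Untitled1.html") as html_file:
    html_text = html_file.read()

match = end_style_tag.search(html_text)      # first closing </style> tag (there may be several)
if match:
    print(html_text[match.end():match.end() + 200])   # peek at what comes right after the styling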

I had this written yesterday. Here's the problem with same-day posts in programming: most of what I am doing on a daily basis is exploration, if I really want to get some real learning done. Most of what I do doesn't yield results the way it does for most developers, and I'm kind of okay with that. Pulling data is more important to me.

What I was actually trying to find out here was this:

When I am done writing these blog posts I go back into my terminal/command line/command prompt, whatever you know it as, I type jupyter nbconvert this_note_book_file.ipynb, and then I move the .ipynb file that is listed in the .gitignore file:~:

(heh I'm not sure you can see that on mobile.

anyway. I'll test it out.)


Just so everyone is aware of this - there are certain features that I have implemented in the desktop version of this site that I am not able to reproduce on mobile, and the reverse is true as well, which totally ruins the balance of experiencing my website.


:~: (see the tethers?) Then there is a .html version of this file, named after the title of this page, which I put into the blog/ folder - which is why the url to this page is mctopherganesh.com/blog/why_does_posting_take_so_long.html
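
(If you want the explicit version of that conversion step, it's roughly this - the filename is just a stand-in for whichever notebook I'm converting, and the --to html part just spells the output format out:

!jupyter nbconvert this_note_book_file.ipynb --to html

)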

Alrighty. Day 10 post to come. More on Scrapy and why it's lit.

In [10]:
soup.p
Out[10]:
<p>Great place to start.</p>