For normal people who just read stuff on the internet, my expectations of reading comprehension are not that high.
For peer scientists and magazines that publish science, though, they are.
A school teacher would catch all of these during grading.
Ⓐ☮☭
This should be the top comment; the researchers did such a good job of making sure anyone with even the slightest reading comprehension would realise this is parody.
Regardless of that, the internet has always been full of lies, and we cannot expect bad actors not to exploit this.


Image? The entire site feels AI-guided.
I have never seen this movie but could instantly hear the classic blunders line in the voice of Grand Nagus Zek.
Did they reference that line in DS9?
Since the meme the other day that showed the difference between eyes-to-the-side prey and eyes-to-the-front predators, I can no longer unsee it!
This is gonna be controversial but to give this another perspective…
A long time ago my buddy and I role-played racist police in GTA (not against other real people, though). We are generally into really dark humour: the more evil and horrible the joke, the bigger the contrast with how we actually are on the inside, and we laugh at the absurdity of that, which makes the experience funny while the joke itself never is.
But context is everything. We are about as anti-racist as it gets IRL; role-playing the absurdity of racist police is a way to process that this fucked up thing happens in our reality.
It affirms what we already know: racist behaviour is not just wrong but also incredibly stupid, and there is no reasonable explanation for why a person would be like that besides some irrational and dangerous mental illness.
There is a story that gets passed down in my family from the time a pocket watch was considered a luxury.
Allegedly, to know whether it was time for a break or the end of work, the boss would just look at his watch and say nope, without anyone being able to verify it.
So my ancestor saved up their money and bought a silver pocket watch to call the boss out when he cheated. A fancy one, presumably because a cheap one could invite the argument that it doesn't keep time as well as the boss's (it might even have been the exact same model, I don't remember).
We still own the watch.
Unironically this is my strategy playing chess.
I'd much prefer this series were focused on a non-trademarked version, telling new stories with new characters rather than recycling Rowling's tale.
But they didn't make that series, they made the one that's coming. As much as I hate it, I do recognise they had the resources to do new things that I would like to know about.
Besides The Worst Witch and that one Netflix anime, I have not seen much TV that explores this setting.
It's the same with the Hogwarts Legacy game: sure, some indie titles explore similar worlds, but there is only one that explores such a big world as a beautiful 3D RPG.

Absolutely fair, they are quite a major source of the accelerated enshittification of modern life; that's why I provided examples so people can still learn without one.
But it would also be ignorant of me not to recognise how much I managed to learn about Linux/open source from these same tools in the last few years. The traditional ways of learning things were never compatible with my personal neurology.
Without LLMs, I'd probably still be stuck on Windows.
I reckon this is a fringe opinion, but I would much prefer to embrace and transform the fandom into something explicitly inclusive and progressive; many aspects of the wizarding world have so much whimsical potential to explore human expression and identity. This is also why so many (ex-)Potterheads are queer.
I reject Rowling as the creator; most of it builds on pre-existing ideas (The Worst Witch, existing folklore). All she really did was stumble on a good mix and then copyright it.
The fandom took that mix and expanded it much further than Rowling's tiny brain can handle, and it brought them together. I hate to lose what we had because of some corporate leech that sucks money out of it.
Now, about this series: obviously she is going to profit from any profit it gets, so giving them money is unethical. Likewise, hyping up the show without a critical perspective is also bad, because others may then buy it or its merch.
But it's still that same mix of potential. The people who make the show may not all agree with Rowling and will reflect their own visions into it, just like the original cast distanced themselves from her and also managed to project more than Rowling could even comprehend that world could contain. It's at least worth a pirated watch to celebrate what it could be, while holding a critical perspective on the flaws it will certainly have.
Hatsune Miku wrote Harry Potter
There is always piracy though.


Scam site?
The URL tunnels you through different sites before ending on a fullscreen, forced, YouTube-looking video that you cannot click away from.

Yes and no.
Yes because I am doing it; no because it's just one part of the process.
NewPipe is cool, but it doesn't run on my phone, so I needed something else.
You may have heard of Plex ("run your own Netflix"); I much prefer its competitor Jellyfin, but that doesn't matter here.
Point is, I download my YouTube videos on a schedule/script straight into Jellyfin's library folder, from which I can log in from any type of device.
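To give an idea of the "schedule" part: a plain cron entry is enough. The script name and paths below are placeholders for the example, not my actual setup.

```shell
# crontab -e
# Run the downloader every 6 hours; yt-dlp writes straight into the
# folder Jellyfin watches as a library, so new videos show up after
# the next library scan. Paths here are examples only.
0 */6 * * * /home/me/scripts/yt-channels.sh >> /home/me/scripts/yt-cron.log 2>&1
```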

part 2
# ========================================================================
# Step 4 (Pass 1): Download at best quality, with a size cap
# ========================================================================
# Tries: best AVC1 video + best M4A audio → merged into .mp4
# If a video exceeds MAX_FILESIZE, its ID is saved for the fallback pass.
# Members-only and premiere errors cause the video to be permanently skipped.
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pass 1: best quality under $MAX_FILESIZE"
yt-dlp \
"${common_opts[@]}" \
--match-filter "!is_live & !was_live & original_url!*=/shorts/" \
--max-filesize "$MAX_FILESIZE" \
--format "bestvideo[vcodec^=avc1]+bestaudio[ext=m4a]/best[ext=mp4]/best" \
"$URL" 2>&1 | while IFS= read -r line; do
echo "$line"
if echo "$line" | grep -q "^ERROR:"; then
# Too large → save ID for pass 2
if echo "$line" | grep -qi "larger than max-filesize"; then
vid_id=$(echo "$line" | grep -oP '(?<=\[youtube\] )[a-zA-Z0-9_-]{11}')
[[ -n "$vid_id" ]] && echo "$vid_id" >> "$SCRIPT_DIR/.size_failed_$Name"
# Permanently unavailable → skip forever
elif echo "$line" | grep -qE "members only|Join this channel|This live event|premiere"; then
vid_id=$(echo "$line" | grep -oP '(?<=\[youtube\] )[a-zA-Z0-9_-]{11}')
if [[ -n "$vid_id" ]]; then
if ! grep -q "youtube $vid_id" "$skip_file" 2>/dev/null; then
echo "youtube $vid_id" >> "$skip_file"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Added $vid_id to skip file (permanent failure)"
fi
fi
fi
log_error "[$(date '+%Y-%m-%d %H:%M:%S')] ${Name} - ${URL}: $line"
fi
done
# ========================================================================
# Step 5 (Pass 2): Retry oversized videos at lower quality
# ========================================================================
# For any video that exceeded MAX_FILESIZE in pass 1, retry at 720p max.
# If it's STILL too large, log the actual size and skip permanently.
if [[ -f "$SCRIPT_DIR/.size_failed_$Name" ]]; then
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pass 2: lower quality fallback for oversized videos"
while IFS= read -r vid_id; do
[[ -z "$vid_id" ]] && continue
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Retrying $vid_id at 720p max"
yt-dlp \
--proxy "$PROXY" \
--download-archive "$archive_file" \
--extractor-args "youtube:player-client=default,-tv_simply" \
--write-thumbnail \
--convert-thumbnails jpg \
--add-metadata \
--embed-thumbnail \
--merge-output-format mp4 \
--max-filesize "$MAX_FILESIZE" \
--format "bestvideo[vcodec^=avc1][height<=720]+bestaudio[ext=m4a]/bestvideo[height<=720]+bestaudio[ext=m4a]/best[height<=720]/worst" \
--output "$DOWNLOAD_DIR/${Name} - %(title)s.%(ext)s" \
"https://www.youtube.com/watch?v=${vid_id}" 2>&1 | while IFS= read -r line; do
echo "$line"
if echo "$line" | grep -q "^ERROR:"; then
# Still too large even at 720p — give up and log the size
if echo "$line" | grep -qi "larger than max-filesize"; then
filesize_info=$(yt-dlp \
--proxy "$PROXY" \
--extractor-args "youtube:player-client=default,-tv_simply" \
--simulate \
--print "%(filesize,filesize_approx)s" \
"https://www.youtube.com/watch?v=${vid_id}" 2>/dev/null)
if [[ "$filesize_info" =~ ^[0-9]+$ ]]; then
filesize_gb=$(echo "scale=1; $filesize_info / 1073741824" | bc)
size_str="${filesize_gb}GB"
else
size_str="unknown size"
fi
if ! grep -q "youtube $vid_id" "$skip_file" 2>/dev/null; then
echo "youtube $vid_id" >> "$skip_file"
log_error "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Skipped $vid_id - still over $MAX_FILESIZE at 720p ($size_str)"
fi
fi
log_error "[$(date '+%Y-%m-%d %H:%M:%S')] ${Name} - ${URL}: $line"
fi
done
done < "$SCRIPT_DIR/.size_failed_$Name"
rm -f "$SCRIPT_DIR/.size_failed_$Name"
else
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pass 2: no oversized videos to retry"
fi
# Clean up any stray .description files yt-dlp may have left behind
find "$DOWNLOAD_DIR" -name "${Name} - *.description" -type f -delete
done

There is no one-stop tutorial for stuff like this, because you could use any scripting language, and which ones you have available may depend on your OS.
But honestly, any half-decent LLM can generate something that works for your specific case.
If you really want to avoid using those,
here is a simple example for Windows PowerShell.
# yt-dlp Channel Downloader
# --------------------------
# Downloads the latest video from each channel in channels.txt
#
# Setup:
# 1. Install yt-dlp: winget install yt-dlp
# 2. Install ffmpeg: winget install ffmpeg
# 3. Create channels.txt next to this script, one URL per line:
# https://www.youtube.com/@SomeChannel
# https://www.youtube.com/@AnotherChannel
# 4. Right-click this file → Run with PowerShell
# Read each line, skip blanks and comments (#)
foreach ($url in Get-Content ".\channels.txt") {
$url = $url.Trim()
if ($url -eq "" -or $url.StartsWith("#")) { continue }
Write-Host "`nDownloading latest from: $url"
yt-dlp --playlist-items 1 --merge-output-format mp4 --no-overwrites `
-o "downloads\%(channel)s\%(title)s.%(ext)s" $url
}
Write-Host "`nDone."
And here is my own bash script (Linux), which has only gotten bigger with more customization over the years.
(part 1, part 2 in the next reply)
#!/bin/bash
# ============================================================================
# yt-dlp Channel Downloader (Bash)
# ============================================================================
#
# Automatically downloads new videos from a list of YouTube channels.
#
# Features:
# - Checks RSS feeds first to avoid unnecessary yt-dlp calls
# - Skips livestreams, premieres, shorts, and members-only content
# - Two-pass download: tries best quality first, falls back to 720p
# if the file exceeds the size limit
# - Maintains per-channel archive and skip files so nothing is
# re-downloaded or re-checked
# - Embeds thumbnails and metadata into the final .mp4
# - Logs errors with timestamps
#
# Requirements:
# - yt-dlp (https://github.com/yt-dlp/yt-dlp)
# - ffmpeg (for merging video+audio and thumbnail embedding)
# - curl (for RSS feed fetching)
# - bc (for the byte-to-GB size math in the oversized-file log message)
# - A SOCKS5 proxy on 127.0.0.1:40000 (remove --proxy flags if not needed)
#
# Channel list format (Channels.txt):
# The file uses a simple key=value block per channel, separated by blank
# lines. Each block has four fields:
#
# Cat=Gaming
# Name=SomeChannel
# VidLimit=5
# URL=https://www.youtube.com/channel/UCxxxxxxxxxxxxxxxxxx
#
# Cat Category label (currently unused in paths, available for sorting)
# Name Short name used for filenames and archive tracking
# VidLimit How many recent videos to consider per run ("ALL" for no limit)
# URL Full YouTube channel URL (must contain the UC... channel ID)
#
# ============================================================================
export PATH=$PATH:/usr/local/bin
# --- Configuration ----------------------------------------------------------
# Change these to match your environment.
SCRIPT_DIR="/path/to/script" # Folder containing this script and Channels.txt
ERROR_LOG="$SCRIPT_DIR/download_errors.log"
DOWNLOAD_DIR="/path/to/downloads" # Where videos are saved
MAX_FILESIZE="5G" # Max file size before falling back to lower quality
PROXY="socks5://127.0.0.1:40000" # SOCKS5 proxy (remove --proxy flags if unused)
# --- End of configuration ---------------------------------------------------
cd "$SCRIPT_DIR"
# ============================================================================
# log_error - Append or update an error entry in the error log
# ============================================================================
# If an entry with the same message (ignoring timestamp) already exists,
# it replaces it so the log doesn't fill up with duplicates.
#
# Usage: log_error "[2025-01-01 12:00:00] ChannelName - URL: ERROR message"
log_error() {
local entry="$1"
# Strip the timestamp prefix to get a stable key for deduplication
local key=$(echo "$entry" | sed 's/^\[[0-9-]* [0-9:]*\] //')
local tmp_log=$(mktemp)
if [[ -f "$ERROR_LOG" ]]; then
grep -vF "$key" "$ERROR_LOG" > "$tmp_log"
fi
echo "$entry" >> "$tmp_log"
mv "$tmp_log" "$ERROR_LOG"
}
# ============================================================================
# Parse Channels.txt
# ============================================================================
# awk reads the key=value blocks and outputs one line per channel:
# Category Name VidLimit URL
# The while loop then processes each channel.
awk -F'=' '
/^Cat/ {Cat=$2}
/^Name/ {Name=$2}
/^VidLimit/ {VidLimit=$2}
/^URL/ {URL=$2; print Cat, Name, VidLimit, URL}
' "$SCRIPT_DIR/Channels.txt" | while read -r Cat Name VidLimit URL; do
archive_file="$SCRIPT_DIR/DLarchive$Name.txt" # Tracks successfully downloaded video IDs
skip_file="$SCRIPT_DIR/DLskip$Name.txt" # Tracks IDs to permanently ignore
mkdir -p "$DOWNLOAD_DIR"
# ========================================================================
# Step 1: Check the RSS feed for new videos
# ========================================================================
# YouTube provides an RSS feed per channel at a predictable URL.
# Checking this is much faster than calling yt-dlp, so we use it
# as a quick "anything new?" test.
# Extract the channel ID (starts with UC) from the URL
channel_id=$(echo "$URL" | grep -oP 'UC[a-zA-Z0-9_-]+')
rss_url="https://www.youtube.com/feeds/videos.xml?channel_id=${channel_id}"
# Fetch the feed and pull out all video IDs
new_videos=$(curl -s --proxy "$PROXY" "$rss_url" | \
grep -oP '(?<=<yt:videoId>)[^<]+')
if [[ -z "$new_videos" ]]; then
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] RSS fetch failed or empty, skipping"
continue
fi
# Compare RSS video IDs against archive and skip files.
# If every ID is already known, there's nothing to do.
has_new=false
while IFS= read -r vid_id; do
in_archive=false
in_skip=false
[[ -f "$archive_file" ]] && grep -q "youtube $vid_id" "$archive_file" && in_archive=true
[[ -f "$skip_file" ]] && grep -q "youtube $vid_id" "$skip_file" && in_skip=true
if [[ "$in_archive" == false && "$in_skip" == false ]]; then
has_new=true
break
fi
done <<< "$new_videos"
if [[ "$has_new" == false ]]; then
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] No new videos, skipping"
continue
fi
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] New videos found, processing"
# ========================================================================
# Step 2: Build shared option arrays
# ========================================================================
# Playlist limit: restrict how many recent videos yt-dlp considers
playlist_limit=()
if [[ $VidLimit != "ALL" ]]; then
playlist_limit=(--playlist-end "$VidLimit")
fi
# Options used during --simulate (dry-run) passes
sim_base=(
--proxy "$PROXY"
--extractor-args "youtube:player-client=default,-tv_simply"
--simulate
"${playlist_limit[@]}"
)
# Options used during actual downloads
common_opts=(
--proxy "$PROXY"
--download-archive "$archive_file"
--extractor-args "youtube:player-client=default,-tv_simply"
--write-thumbnail
--convert-thumbnails jpg
--add-metadata
--embed-thumbnail
--merge-output-format mp4
--output "$DOWNLOAD_DIR/${Name} - %(title)s.%(ext)s"
"${playlist_limit[@]}"
)
# ========================================================================
# Step 3: Pre-pass — identify and skip filtered content
# ========================================================================
# Runs yt-dlp in simulate mode twice:
# 1. Get ALL video IDs in the playlist window
# 2. Get only IDs that pass the match-filter (no live, no shorts)
# Any ID in (1) but not in (2) gets added to the skip file so future
# runs don't waste time on them.
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pre-pass: identifying filtered videos (live/shorts)"
all_ids=$(yt-dlp "${sim_base[@]}" --print "%(id)s" "$URL" 2>/dev/null)
passing_ids=$(yt-dlp "${sim_base[@]}" \
--match-filter "!is_live & !was_live & original_url!*=/shorts/" \
--print "%(id)s" "$URL" 2>/dev/null)
while IFS= read -r vid_id; do
[[ -z "$vid_id" ]] && continue
grep -q "youtube $vid_id" "$archive_file" 2>/dev/null && continue
grep -q "youtube $vid_id" "$skip_file" 2>/dev/null && continue
if ! echo "$passing_ids" | grep -q "^${vid_id}$"; then
echo "youtube $vid_id" >> "$skip_file"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Added $vid_id to skip file (live/short/filtered)"
fi
done <<< "$all_ids"

It's an open source tool to download YouTube videos.
Just about every mainstream YouTube download program you or your parents have ever used is actually just a wrapper for this.
Bonus: if you want to learn more about coding, it's not that hard to make a script that automatically downloads the last video from a list of channels and runs on a schedule. Even AI can do it.
This, and just act mature about it with critical judgement.
Regardless of what Rowling says, the wizarding world, which is a mix of common mythology and folklore, has huge queer potential. It's hard to find media that portrays something like it this well.
The series will have its flaws, the writer will make sure of that, but she will also be too shortsighted to recognise the extensions and personal touches added by everyone else involved who may not agree with her viewpoints.