Shell scripts for day-to-day use...


I like Z for a jump list (quickly jumping to directories I frequent):
It's a bit heavyweight for the Pandora, so there I just use an alias (or browse with Thunar and drag and drop the directory into bash).
If you use zsh, use this instead.
If you are on Windows, use this.
I like V too.
I had also used the same author's epub reader, but later moved to epr, which also supports images.
 
It's not really a script, but I've just recently moved from using a "bare" git repository to manage my dotfiles to using GNU Stow. The benefit is that my dotfiles are now organised in folders by application name, and it's easier to find, edit, or add to them.


Steps, in short (example using zsh):

Bash:
mkdir -p ~/dotfiles/zsh
mv ~/.zshrc ~/dotfiles/zsh
cd ~/dotfiles && stow zsh

The final step creates a symlink from ~/dotfiles/zsh/.zshrc to ~/.zshrc
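Adding another application later is the same pattern again; for example (assuming you also keep a ~/.gitconfig you want managed, with "git" as the package name being just my choice):

Bash:
mkdir -p ~/dotfiles/git
mv ~/.gitconfig ~/dotfiles/git
cd ~/dotfiles && stow git     # symlinks ~/.gitconfig -> ~/dotfiles/git/.gitconfig
# and "stow -D git" removes the symlinks again if you change your mind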

Also, if I need to quickly switch branches, I often use a temporary "work in progress" commit to avoid stashing issues:

Bash:
git-wip-resume () {
        # If the last commit is a "wip" marker, drop it and leave the changes staged
        is_wip=$(git log -1 --oneline | grep -P "\bwip$" -o)
        if [ "$is_wip" = "wip" ]
        then
                echo "Reverting to previous work in progress"
                git reset --soft HEAD~1
        fi
}

git-wip () {
        # Park all current changes to tracked files in a throwaway "wip" commit
        git commit -am "wip"
}
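Typical usage, sketched with made-up branch names (note that git commit -a only picks up changes to tracked files, so brand-new files would need a git add first):

Bash:
# On feature/foo with half-finished changes:
git-wip                   # park everything in a throwaway "wip" commit
git checkout main         # go and do something else
# ...later...
git checkout feature/foo
git-wip-resume            # drops the "wip" commit; the changes come back staged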
 
I didn't know about stow, but I just organized my dotfiles using it. This will be useful when I migrate my dotfiles to the Pyra. Hopefully soon :)
 
Another script I like and use routinely for playing videos on Kodi from my laptop (sending YouTube links, or playing local videos from the laptop on Kodi running on a Pi connected to the TV):
send_to_kodi
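For anyone wondering what it does under the hood: as far as I can tell it's essentially a wrapper around Kodi's JSON-RPC interface. A rough sketch of the same idea (assuming Kodi's web server is enabled on port 8080 and the YouTube add-on is installed; the host, credentials and video id below are all placeholders, and local files additionally need to be served over HTTP, which the real script handles for you):

Bash:
#!/bin/sh
# Ask Kodi (running on the Pi) to play a URL via its JSON-RPC interface.
KODI="http://raspberrypi:8080/jsonrpc"                        # placeholder host/port
URL="plugin://plugin.video.youtube/play/?video_id=VIDEOID"    # placeholder video id

curl -s -u kodi:password -X POST -H "Content-Type: application/json" \
     -d '{"jsonrpc":"2.0","method":"Player.Open","params":{"item":{"file":"'"$URL"'"}},"id":1}' \
     "$KODI"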
 
A script that will check for duplicated files in a specified folder on your computer. You can use it to deduplicate any kind of user data, such as old game ROMs that you inherited from various old sources, your porn folder, or really any kind of data in folders that's purely for your own use. I've saved a couple gig from my rather full server, which is a welcome relief.

Code:
#!/usr/bin/env bash

function normalisepath {
    # (i.e. eliminate /../ in the middle of supplied paths)
    local better=$1
    better=${better//\/.\//\/}
    while [[ $better =~ ([^/][^/]*/\.\./) ]]
    do
        better=${better/${BASH_REMATCH[0]}/}
    done
    echo "$better" # Return path
}

declare -a normpath
for path in "$@"
do
    normpath+=("$(normalisepath "$path")") # Quoted so paths with spaces stay whole
done

declare normfiles
numfiles=0
for fil in "${normpath[@]}"
do
    found=`find "$fil" -type f`
    normfiles+="$found"$'\n'
    let numfiles=numfiles+`echo "$found"|wc -l` # Count only this path's files, not the running total
done

echo "Getting file sizes for comparison"
declare sizes
declare num
while IFS= read -r fil; do
    if [ -n "$fil" ]
    then
        sizes+=`stat -c %s\ %n "$fil"`$'\n'
    fi
    let num=num+1
    let percent=num*100/numfiles
    echo -en "\r${percent}%"
done <<< "$normfiles"
echo -e "\rDone"

echo "Looking for duplicate file sizes"
declare dups
declare -a sizematches
dups=`echo -n "$sizes" | cut -f -1 -d " " | sort -g | uniq -d`
if [ ${#dups} -gt 1 ]
then
    while IFS= read -r dup; do
        sizematches+=("$dup")
    done <<< "$dups"
    numsizes=${#sizematches[@]}
    echo "Found "$numsizes" duplicate sizes"
else
    echo "Congratulations: no size matches found"
    exit 0
fi

declare -a files
ret=0
num=0
for size in "${sizematches[@]}"
do
    let num=num+1
    echo -en "\r($num/$numsizes)\r"
    files=()
    while IFS= read -r fil; do
        files+=("$fil")
    done <<< `echo -n "$sizes" | grep "^${size}\s" | cut -f 2 -d " "`
    hashsize=1024
    while [ $hashsize -le $size -a ${#files[@]} -gt 0 ]; do
        hashes=""
        for fil in "${files[@]}"
        do
            hashes+=`echo -n "$fil"" " && head -c $hashsize "$fil"|sha512sum -`$'\n' # Space to put a gap between the filename and the hash
        done
        counthash=`echo -n "$hashes"" " | cut -f 2 -d " " | sort -g | uniq -c`
        duphash=`echo -n "${counthash//      }" | grep -x "\s*\([2-9]\|[[:digit:]][[:digit:]][[:digit:]]*\)\s.*" | cut -f 2 -d " "`
        files=()
        if [ ${#duphash} -ge 64 ]
        then
            while IFS= read -r fil; do
                files+=("$fil")
            done <<< `echo -n "$hashes"|grep -F "$duphash" |cut -f 1 -d " "`
        fi
        let hashsize=2*hashsize
    done
    if [ ${#files[@]} -gt 0 ]; then
        # Files remain after hash loop; same so far
        hashes=""
        for fil in "${files[@]}"
        do
            hashes+="`sha512sum "$fil"`"$'\n'
        done
        duphash=`echo -n "$hashes"" " | cut -f 1 -d " " | sort -g | uniq -d`
        files=()
        if [ ${#duphash} -ge 64 ]
        then
            while IFS= read -r fil; do
                files+=("$fil")
            done <<< `echo -n "$hashes" | grep -F "$duphash" | cut -f 2- -d " "`
        fi
        if [ ${#files[@]} -gt 0 ]; then
            echo "The following files seem to be duplicates:"
            for dup in "${files[@]}"
            do
                echo "$dup"
            done
            ret=2
        fi
    fi
done
if [ $ret -eq 0 ]; then
    echo "Congratulations; all files differ"
else
    echo -en "                  \r" # Clear out last number, in case you have a small prompt
fi
exit $ret
It fettles the paths you supply to split them into subdirectories if your collection contains lots of files, because otherwise it can be rather quiet while processing, which is not something I personally like. There's a number, maxsplit, defined near the top which might need fiddling with for your particular use. If your computer or disc is significantly faster than mine you might want to double it, and if you've got big files it'll slow up the hashing, so you might want to reduce it to make it noisier.
I've updated this now, to include the suggestions below. I've also rewritten the noisiness, so that now it doesn't split up your paths, but instead writes out temporary status lines. It's also a lot faster than it used to be, thanks to the suggested improvements. It could conceivably be made faster; currently it hashes the first 1k of each file, then the first 2k, 4k, 8k, 16k and so on; it might be faster to hash the first 1k, then the next 1k, and the next, but at least this way it does finally hashsum the entire file, to be safe.
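If anyone wants the doubling-prefix idea in isolation, a minimal two-file sketch looks something like this (it assumes the two files are already known to be the same size, which is what the size pass above guarantees):

Bash:
#!/usr/bin/env bash
# Compare two same-sized files by hashing growing prefixes; args: FILE1 FILE2
a=$1; b=$2
size=1024
while [ "$size" -lt "$(stat -c %s "$a")" ]; do
    if [ "$(head -c "$size" "$a" | sha512sum)" != "$(head -c "$size" "$b" | sha512sum)" ]; then
        echo "Files differ"; exit 1
    fi
    size=$((size * 2))
done
# The prefixes never differed, so hash the whole files to be sure
if [ "$(sha512sum "$a" | cut -d" " -f1)" = "$(sha512sum "$b" | cut -d" " -f1)" ]; then
    echo "Files appear to be duplicates"
else
    echo "Files differ"; exit 1
fi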
 
I must admit, I didn't even consider md5 because for a number of years now, shasumming everything has been quick enough for me. I did check sha256sum versus 512 and to my surprise found it was quicker to do the full 512.

Edit: Just tried it, and it saved 20 seconds off a 93-second sha512sum run, and didn't flag up any false duplicates on my data either. I think I'll stick with sha512sum for the extra level of security it gives me, though.
 
A script that will check for duplicated files in a specified folder on your computer. You can use it to deduplicate any kind of user data, such as old game ROMs that you inherited from various old sources, your porn folder, or really any kind of data in folders that's purely for your own use. I've saved a couple gig from my rather full server, which is a welcome relief.

Code:
#!/usr/bin/env bash

maxsplit=800 # The maximum number of files this will process without trying to
# split the path into multiple subdirectories.

declare -a paths

function normalisepath {
    # (i.e. eliminate /../ in the middle of supplied paths)
    local better=$1
    better=${better//\/.\//\/}
    while [[ $better =~ ([^/][^/]*/\.\./) ]]
    do
        better=${better/${BASH_REMATCH[0]}/}
    done
    echo "$better"
}

function dividepath {
    local path=$1
    local maxdepth=$2
    if  [ $maxdepth -gt 0 ] &&
      [ `find "$path" -type f|wc -l` -gt $maxsplit ]
      then
        #  split it
        while IFS= read -r file
        do
            if [ -n "$file" ]
            then
                paths+=("$file")
            fi
        done <<< `find "$path" -maxdepth 1 -mindepth 1 -type f`
        while IFS= read -r path
        do
            if [ -n "$path" ]
            then
                dividepath "$path" $(($maxdepth - 1))
            fi
        done <<< `find "$path" -maxdepth 1 -mindepth 1 -type d`
    else
        paths+=("$path")
    fi
}

for path in "$@"
do
    path=$(normalisepath "$path")
    dividepath "$path" 2
done

declare hashes
for path in "${paths[@]}"
do
    if [ -d "$path" ]; then echo "Searching in "$path"..."; fi
    hashes+=`find "$path" -type f -exec stat -c %s {} \; -exec sha512sum {} \; | paste - - -d" "`
done

echo "Looking for dups..."
declare dups
dups=`echo -n "$hashes" | cut -f -2 -d " " | sort -g | uniq -d`
if [ ${#dups} -lt 64 ]
then
    echo "Well done, no duplicates found"
else
    while IFS= read -r dup; do
        if [ ${#dup} -ge 64 ]; then
            echo "The following files seem to be duplicates:"
            echo "$hashes"|grep -F "$dup"|cut -f 3- -d " "
        fi
    done <<< "$dups"
fi
It fettles the paths you supply to split them into subdirectories if your collection contains lots of files, because otherwise it can be rather quiet while processing, which is not something I personally like. There's a number, maxsplit, defined near the top which might need fiddling with for your particular use. If your computer or disc is significantly faster than mine you might want to double it, and if you've got big files it'll slow up the hashing, so you might want to reduce it to make it noisier.
vifm provides a nice file compare utility command; to look for duplicates, one can use ":compare listdups ofone".

Now the script I want to post is an ffmpeg wrapper for cropping video. It allows cropping from the top/bottom/left/right by a number of pixels, after which the video is previewed using ffplay and the command to crop it is displayed. This script doesn't modify the original video.

Code:
#!/bin/sh

eval `ffprobe -loglevel quiet -show_entries stream=width,height "$1" | egrep -e '^(width|height)='`
test `expr match "$width" '^[0-9]*$'` -eq 0 -o `expr match "$height" '^[0-9]*$'` -eq 0 && {
    echo "Error: Could not get video resoloution of \"$1\" 
    Make sure the file is a media file with a video stream."
    exit 3
}

echo "video width is $width
video height is $height"

echo "pixel to crop from top" 
read topx
echo "pixel to crop from bottom" 
read bottomx
echo "pixel to crop from left" 
read leftx
echo "pixel to crop from right" 
read rightx


ffmpeg -i "$1" -filter:v "crop=iw-$leftx-$rightx:ih-$topx-$bottomx:0+$leftx:0+$topx" -c:a copy -f matroska  - | ffplay -loglevel quiet -

echo " use following command  to crop the video

ffmpeg -i "$1" -filter:v "crop=iw-$leftx-$rightx:ih-$topx-$bottomx:0+$leftx:0+$topx" -c:a copy -f matroska $1_cropped.mkv"
 
I'm currently using python scripting to scale my videos and drop the frame rate for my laptop, and to convert them to AAC(LC) for playback on my TV. But your script looks significantly simpler than my python code, although part of that is to cope with all of the different permutations of things you need to do; and if, for example, you don't need to transcode the video but do need to convert the audio, adding -c:v copy does speed things up dramatically.
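For the record, that copy-the-video, transcode-the-audio case boils down to a one-liner along these lines (filenames and bitrate are just examples; as far as I know ffmpeg's built-in aac encoder produces AAC-LC by default):

Bash:
ffmpeg -i input.mkv -c:v copy -c:a aac -b:a 160k output.mkv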
 
Filtering with -filter:v doesn't work together with codec copy, as cropped video needs re-encoding (even when using the same codec).

The purpose of this script is to crop margins (scaling is a different thing) from videos I record using obs-studio (mainly live lectures). For web upload and size reduction I use different scripts. To scale I do "-filter:v scale=-1:480", and if it is already low resolution, "-crf 26" to improve compression.
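Put together, that kind of shrink-for-upload pass would look roughly like this (filenames are placeholders, and -crf assumes ffmpeg picks libx264 for the output; scale=-2:480 instead of -1:480 avoids odd-width errors with that encoder):

Bash:
ffmpeg -i lecture.mkv -filter:v scale=-1:480 -crf 26 -c:a copy lecture_480p.mkv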
 
Yes, -c:v copy is no use to you; you can't use it if you plan to change the video in any way. But in my case, if the video turns out to be 1080p or less and the frame rate isn't too onerous, and the only problem is my TV refuses to play AAC(HE) audio, then I can just copy the video and transcode the audio. So yeah, that wasn't really in reply to you, more of an aside, I'm afraid.
 
A script that will check for duplicated files in a specified folder on your computer. You can use it to deduplicate any kind of user data, such as old game ROMs that you inherited from various old sources, your porn folder, or really any kind of data in folders that's purely for your own use. I've saved a couple gig from my rather full server, which is a welcome relief.
...

I'm computer illiterate so I don't know if your script already does this, but you should have it make a list of files with the same filesizes, and THEN get the hash of those files.

For more speed hackery, just hash the first xxx bytes of the file, and if they match, hash the whole thing.
 
That's true, at least the first idea. It makes the whole thing a bit more complicated; I'd have to do it in two stages: first get all of the file sizes and look for duplicates there, then shasum only those files whose sizes match. I might do that if it ever becomes too slow to run on my data, but at present it's all done in less than a couple of minutes.
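In outline, that first stage is just something like this (the folder path and the example size are placeholders):

Bash:
# Sizes that occur more than once are the only candidates worth hashing
find /some/folder -type f -printf "%s\n" | sort -n | uniq -d
# Then list the files that have one of those sizes, e.g. 123456 bytes
find /some/folder -type f -size 123456c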

As for the second idea, it would be worthwhile if we were talking something entirely computer generated and not compressed. Maybe if it were checking uncropped uncompressed game isos, which tend to be packed with random garbage to make it hard to see when a small game is on a commercial CD, so they're all the same size. The only game roms I needed to unduplicate were old NES roms I'd inherited, so those never took very long at all to shasum, but in case anyone needs to unduplicate PS1 isos or later that's not too much of a stretch, and I could implement that. It would require a temp file to write the head of the file out to, but that's not impossible to make.
 
So, I've updated the hashdups script above. It's now a lot faster. My test data has grown somewhat since my last posting; my photos went from 3m08s down to 38s, and more impressively my videos archive went from 2m04s down to just 4.3s. Thanks, @NutNut! No temp file needed, by the way; I was able to crop the file and hash it all in one line.
 
This is a menu to run your applications. You can run it from a terminal or via a keybinding. When run from a terminal (vt or console), it uses fzf to provide selection. When run from a window manager shortcut, it uses rofi (optionally you can use dmenu, by changing one line of code).

You can add your personal favorites to $HOME/.config/shortcuts, one per line; they will be displayed on top. For example, you can have "chromium www.google.com" or "terminator -x nnn -Rfe" in your shortcuts.
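For example, a $HOME/.config/shortcuts containing the entries above plus a couple of made-up ones would look like this:

Code:
chromium www.google.com
terminator -x nnn -Rfe
firefox
mpv /home/user/lecture.mkv

The script itself: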
Bash:
#!/bin/bash

shopt -s lastpipe
## make temporary file and copy all commands to it
f=$(mktemp /tmp/"${0##*/}".XXXXX)
cat $HOME/.config/shortcuts <(IFS=:; find $PATH -executable -printf "%f\n" | sort -r ) | awk '!NF || !x[$0]++' > "$f"

## choose command to execute using fzf (if run from terminal) or rofi (if run using window-manager shortcut)
if [[ $TERM = linux && -n $DISPLAY ]]; then
     rofi -dmenu -font "Noto Sans Mono Medium 18" -p "RUN THIS" -input $f | read command
else
    fzf --border --no-sort --reverse --height 10 --prompt "RUN THIS: " < $f | read command
fi
## cleanup at exit
clean_f() {
    [[ -f "$f" ]] && rm "$f"
}
trap clean_f EXIT
## run the command
exec $command &

EDIT: script updated after some suggestions on the Arch Linux forums
 