So I wrote a shell script that does this job for me. The idea is to first use the svn diff --summarize option with grep and awk to find the names of the files that have text changes, then iterate over that list, generating an svn diff for each file and appending it to a single file to produce the final diff.
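To see why the grep and awk pipeline works, here is roughly what the summarize step prints (the repository URLs below are made up for illustration, and the exact column widths can vary between SVN versions, so verify where the path starts in your output before relying on substr($0,9)):

svn diff https://svn.example.com/branches/a/ https://svn.example.com/branches/b/ --summarize
M       https://svn.example.com/branches/a/src/main.c
 M      https://svn.example.com/branches/a/src
MM      https://svn.example.com/branches/a/README

The first column is the text status and the second the property status, so a line starting with a space is a property-only change. grep -v '^ ' drops those lines, and awk's substr($0,9) strips the status columns, leaving just the file URL. Here is the full script: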
#!/bin/bash
# Call this script to generate a clean svn diff (text changes only)
url1=$1
url2=$2

# If the URLs don't end with a /, append one
if [ "${url1#${url1%?}}" != "/" ]; then
    url1="$url1/"
fi
if [ "${url2#${url2%?}}" != "/" ]; then
    url2="$url2/"
fi

echo "$url1"
echo "$url2"

# Length of url1, used later to strip the url1 prefix from each file URL
length1=${#url1}

# Collect the files with text changes; grep -v '^ ' drops lines whose
# first status column is blank, i.e. property-only changes
files=$(svn diff "$url1" "$url2" --summarize | grep -v '^ ' | awk '{print substr($0,9)}')

# Generate a diff for each file and append it to the output file
for f in $files
do
    f2=${url2}${f:${length1}}
    svn diff "$f" "$f2" >> svn_diff_clean.txt
    echo "processed $f and $f2"
done
echo "done"
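Assuming you save it as svn_clean_diff.sh (the file name is my own choice, and the branch URLs are placeholders), a typical run looks like:

chmod +x svn_clean_diff.sh
./svn_clean_diff.sh https://svn.example.com/branches/a https://svn.example.com/branches/b
less svn_diff_clean.txt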
When I execute the above script with two URLs, it generates svn_diff_clean.txt containing only the full-text diffs. Using it on my project reduced the number of lines in the output by almost 50%, though the savings depend on your project structure and how heavily you use SVN metadata (properties).
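If you want to estimate the potential savings for your own repository before running the script, you can count the property-only changes directly (grep -c counts matching lines, grep -vc the rest; the URLs are again placeholders):

svn diff https://svn.example.com/branches/a/ https://svn.example.com/branches/b/ --summarize | grep -c '^ '    # property-only changes
svn diff https://svn.example.com/branches/a/ https://svn.example.com/branches/b/ --summarize | grep -vc '^ '   # items with text changes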
Tip: If you have to give a password every time you run SVN commands, try deleting the .subversion folder from your home directory and running the command again; it will ask for the password only the first time and then use the cached one.
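In practice I would rename the folder rather than delete it, so the old configuration stays recoverable; something like:

mv ~/.subversion ~/.subversion.bak    # svn recreates the folder with default settings on the next run
svn info https://svn.example.com/branches/a    # prompts for the password once, then caches it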
The script doesn’t handle bad URLs or null inputs, but I am too lazy to do that. 🙂
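For anyone less lazy than me, a minimal validation sketch could go right after the #!/bin/bash line (svn info exits non-zero for an unreachable or malformed URL, which makes it a cheap check):

if [ $# -ne 2 ]; then
    echo "usage: $0 <url1> <url2>" >&2
    exit 1
fi
for url in "$1" "$2"; do
    if ! svn info "$url" > /dev/null 2>&1; then
        echo "error: cannot reach $url" >&2
        exit 1
    fi
done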