Raging Sloth

House is a mess but Beta almost here

So my house purchase is a mess right now. The owners didn't mention a few expensive safety-related defects during the price negotiation and now they don't want to bother fixing them.

On the Beta front, however, I've switched from PIL to pgmagick (an interface to GraphicsMagick) and everything is working. The colour-related issues I saw with PIL are gone, though for some reason the thumbnails don't seem as sharp. I think the convert utility is using resizing algorithms that aren't available through pgmagick, since it only connects to the C++ interface and not the C one. It's also possible that unsharp mask isn't working correctly (or perhaps at all). The speed difference is still there; I haven't done any large-scale comparisons yet to confirm it's exactly the same, but CPU utilization is minimal so I'm still bound by the upload process. I run unsharp mask with the same options the Synology uploader does, and I've even added regular sharpening on top, but it doesn't seem to be doing much. In the end the colours are correct this time and I think I've reached the point of good enough. Anyway, I've spent far too much time on this tonight, so I'll try to clean it up and get it out to my volunteer testers tomorrow.
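For anyone curious, the pgmagick side looks roughly like this. It's only a minimal sketch, assuming pgmagick exposes Magick++'s unsharpmask(); the file names, geometry, filter choice, and numbers are placeholders, not the options the Synology uploader actually uses.

```python
from pgmagick import Image, FilterTypes

# Minimal sketch of the pgmagick thumbnail path; all values are placeholders.
img = Image('photo.jpg')
img.quality(90)
img.filterType(FilterTypes.LanczosFilter)  # pick a resize filter explicitly
img.scale('800x800')                       # fit inside 800x800, keeping aspect ratio
img.unsharpmask(2.0, 0.5, 0.5, 0.05)       # radius, sigma, amount, threshold
img.sharpen(1.0)                           # the extra "regular sharpening" mentioned above
img.write('thumb.jpg')
```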
Comments

You know what makes it hard to concentrate on programming?

Negotiating the purchase of a house :) I'll try to get the Beta finalized soon though, and hopefully I'll have bought a house too.
Comments

Quick Update

Just as a quick update, I'm out of town all this week with only my MacBook. Unfortunately, Python modules are not all that easy to install on OS X… After a lot of tweaking, testing, and obscure linker and compilation errors, I'll hopefully get some work done on it tonight. I'd like to have a setup where normal people can easily install everything they need and run the app on any OS, so I'll probably bundle everything together for Mac so the various compilations won't be needed. Anyway, I finally got pgmagick working, so progress continues.
Comments

Beta Testing Update

So I noticed that the thumbnails I was getting looked a little strange compared to the DSAssistant's. It turns out the library I picked isn't embedding colour profiles, so I'm going to swap it out for PythonMagick, which is a Python hook into the same library DSAssistant uses. I've been busy all week and will be next week too, so if I don't get it done Sunday it might not be till early June.
Comments

Beta is done, hopefully I'll be able to distribute tomorrow

It is late and the Beta is done. I timed one of my test runs and my code came in at 54 seconds over HTTPS and 46 seconds over HTTP, compared to 2 minutes 8 seconds for the DSAssistant uploader (keep in mind the NAS was not under controlled conditions, there may have been some Time Machine backups going on, so I'll need to reboot the NAS at some point and figure out just where I'm at performance-wise). Of course there are a few things to consider. For one, my code isn't performing unsharp mask while the DSAssistant is, though I have a bit of a beast of a machine and it seems to grind through the unsharp mask a lot faster than my old one did. The upload process also seems to be pretty slow, and since my code is parallel I'm thinking I can add unsharp mask almost for free time-wise.

I have a few other things to look into to speed things up. For example, right now all the thumbnails are generated in separate processes and then returned to the main process to be uploaded from one place; if I have each worker process upload its own images that might speed things up dramatically (most likely all the images are currently being sent over a socket back to the main process). I could also try parallelizing the uploads themselves, but the NAS seems to be the weakest link in this process anyway, so I might not get any additional speed.

A few other differences: my approach can use HTTPS while the DSAssistant cannot (important if you want to upload from outside your home network, since otherwise your password will be sent in the clear…), and my approach creates all the images in memory, so there are no temporary files. Also, from what I can tell, DSAssistant seems to use the file-modified time to determine when photos were taken, while my approach checks the Exif data for when the photo was actually taken (this would explain why my photos never seem to be in the proper order when I sort them by date). One last thing: even if I can get the files to upload faster, the NAS already can't keep up with the upload process, so even after the upload it will be a while before PhotoStation is up to date if you have a large upload. Anyway, I have work tomorrow and really have to get to bed. The week isn't a good time for me (work and all) but I'll try to get the uploader released ASAP.
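For the Exif-based dates, the idea is roughly this (a sketch of the approach rather than the actual uploader code; it uses PIL's _getexif() and falls back to the file's modified time):

```python
import os
import time
from PIL import Image

DATETIME_ORIGINAL = 36867  # standard Exif tag id for DateTimeOriginal

def taken_time(path):
    """Return a Unix timestamp for when the photo was taken.

    Prefers the Exif DateTimeOriginal tag and falls back to the file's
    modified time if the tag (or any Exif data) is missing.
    """
    try:
        exif = Image.open(path)._getexif() or {}
        stamp = exif.get(DATETIME_ORIGINAL)
        if stamp:
            return time.mktime(time.strptime(stamp, '%Y:%m:%d %H:%M:%S'))
    except (IOError, ValueError, AttributeError):
        pass
    return os.path.getmtime(path)
```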

BTW if anyone just got a NAS and wants to do a huge upload let me know in a comment.
Comments

So… Close...

Well, time for bed tonight. I got a later start than I thought I would, but I've got all the necessary UI done. It could use some usability work (progress bars and such), and it currently loads the entire PhotoStation directory structure on startup, which takes way too long, but I have all the UI I need to get the work done. I just have to plug my thumbnail generation code into it now, but first, to bed. Hopefully I'll have a Beta up tomorrow (though I also have to figure out the best way to package it).
Comments

Progress report

So I have pretty much all the code I need for the PhotoStation portion, and I'm about 25% of the way through the UI code to turn it into a usable product. Unfortunately it is now Sunday night and I have work tomorrow, so this likely won't get finished for a few days. I'm doing the UI with wxGlade, which is in general an awesome tool but really needs an undo button… You think you have a control selected, hit delete, and then a whole window is gone…
Comments

Almost done with my third-party PhotoStation uploader

So right now I have a script that will successfully upload a single file and save login information and such in an encrypted file accessible through a single password (hard to explain, but it took a good deal of time). This way you can use a simple password for the uploader while the more complex password for the NAS is saved encrypted, so it isn't sitting in cleartext on your hard drive. Right now the script just uploads the same file for all of the thumbnails and for the original, and it only works on a single file at a specific location. So obviously I need to change it to accept any file or collection of files and combine it with the thumbnail generation code I already have (and adjust that a bit, as there is no longer any need to save intermediate files to disk; it can all be done in memory). I also need to use Wireshark to sort out how things change when you upload more than one file, and how to query directories and make new ones. I would probably have a totally working uploader script right now if I'd left the encryption out, but what can I say, I am security minded and I liked the idea of branching out into a different module for a while.
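As a sketch of the general idea (not the module or file format the script actually uses), you can derive a key from the simple uploader password and use it to encrypt the NAS credentials, for example with the cryptography package's PBKDF2 and Fernet:

```python
import base64
import json
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def _key(simple_password, salt):
    # Derive a Fernet key from the (weaker) uploader password.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=200000)
    return base64.urlsafe_b64encode(kdf.derive(simple_password.encode()))


def save_credentials(path, simple_password, nas_user, nas_password):
    # Store salt + encrypted JSON blob; only the simple password unlocks it.
    salt = os.urandom(16)
    token = Fernet(_key(simple_password, salt)).encrypt(
        json.dumps({'user': nas_user, 'password': nas_password}).encode())
    with open(path, 'wb') as f:
        f.write(salt + token)


def load_credentials(path, simple_password):
    with open(path, 'rb') as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    return json.loads(Fernet(_key(simple_password, salt)).decrypt(token))
```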
Comments

A Long time coming and almost there...

Thanks to a comment from Louis Somers pointing out the use of a particular PHP file in the uploading of thumbnails to a Synology NAS, I'm hot on the trail of making a useful and easy-to-use alternative file uploader. It turns out that there are actually a few PHP files in the mix (it was driving me crazy trying to figure out how that one file alone could do everything, but a little more digging into the file system shows there are at least 5 server scripts involved). I've already got super fast multithreaded Python code to generate the thumbnails, so once I can figure these scripts out it won't be long.

**Update - after a good deal of struggling I've come to realize a few things:
  1. There are two PHP files in use. One gives a list of the files that are going to be transferred, and the other is called for each thumbnail that is uploaded.
  2. The file uploads are done with PUTs rather than POSTs to the PHP files (this was really confusing when reading the PHP file and seeing references to files but nothing showing where they were taken from the request…).
  3. Even though my NAS is set up to prefer HTTPS, all of these transfers are done over unencrypted HTTP, which means I spent a lot of time setting up proxies and things so I wouldn't have to mess with my settings to spy on the requests, when the answer in the end was Wireshark.

**Update 2
  1. There are three PHP files in use, because while you can use HTTP Auth to access files on the NAS, you need to be specifically logged in to Photo Station to be able to do anything (I've got this working).
  2. The second file sets up the transfer for a particular image (I've got this working).
  3. The third file is called multiple times with PUT: once for each thumbnail and once for the original file. (I have a test case for a single thumbnail working.)
  4. In retrospect the original comment mentioned the use of PUT, but I lost track of that in the confusion over the fact that the system passes values in HTTP headers and accesses them through the _SERVER object when it really should be using parameters. This caused untold problems because I duplicated the Wireshark output when setting my parameters, including the termination codes, and the Python requests library went all nuts on me when I included them… I spent about 3 hours debugging just to find out I had to strip \r\n from my header values (see the sketch after this list).
  5. I'm done for the night, but I now have all the basic code needed to make a third-party uploader. I expect a working script tomorrow and a GUI to follow (not sure how long that will take, as I haven't used a Python GUI library in quite a while).
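To give a sense of what item 4 means in practice, here's a rough sketch of a single thumbnail PUT with the requests library. The URL and header names are made-up placeholders; the real script name and headers come from the Wireshark capture. The point is just that the header values have to be clean strings with no trailing \r\n.

```python
import requests

# Placeholder URL and header names for illustration only; the real PHP script
# name and headers come from the Wireshark capture.
url = 'http://diskstation.local/photo/include/upload_thumb.php'
headers = {
    'X-FILE-NAME': 'IMG_0001.jpg',   # plain string values: no trailing \r\n
    'X-THUMB-SIZE': 'large',
}

with open('thumb_large.jpg', 'rb') as f:
    resp = requests.put(url, data=f, headers=headers)

print(resp.status_code, resp.text[:200])
```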
Comments

Unfinished Script

So I was up late last night trying to get my script working again. It turns out the script worked the whole time, but since I upgraded my distro the uploader simply doesn't work at all on my Linux box. I don't have time to do any more testing or prep for other OSes until about a week from now, but I've added the second-phase script to the zip file for those who want to venture on with very little instruction. Basically, the Synology Assistant doesn't actually have any code in it to work on media files; it is bundled with other utilities which it calls. The utility it uses to create thumbnails is convert (convert.exe on Windows). So the second phase gets put in place of this file, and rather than generating thumbnails it simply copies the thumbnails that were generated in phase 1 to where Synology Assistant expects them to be (there's a rough sketch of the idea below).

On Mac and Linux this should just work as long as you name the script convert with no extension, set execute permissions, and replace the convert file bundled with Synology Assistant with the script (on Mac you'll have to right click on the .app file and hit "show package contents" to browse around). On Windows it is more complicated: Windows doesn't treat scripts and executables the same from a shell perspective, and the file is called convert.exe, so an executable binary is expected. I haven't been able to test it, but you can make an executable binary with py2exe; just follow the tutorial here (obviously you'll need to install py2exe). I'm not sure if you'll run into any strange problems, and don't forget that a py2exe executable will need all the generated support files, not just the convert.exe. I'll get this sorted out in a week or two, but I'm out for now. Good luck to anyone who attempts it, and remember I provide no warranty whatsoever, the file is provided as is, but I'm more than happy to respond to comments if there are any.
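As promised, a very rough sketch of what the replacement convert script could look like. The argument layout (source photo and output path as the last two arguments) and the customThumbs file naming are assumptions for illustration; the real script has to match whatever Synology Assistant actually passes to convert.

```python
#!/usr/bin/env python
# Rough sketch only: assumes the last two arguments are the source photo and
# the output thumbnail path, and that phase 1 left a matching thumbnail in a
# customThumbs folder next to the photo.
import os
import shutil
import sys

src = sys.argv[-2]   # photo the assistant wanted converted
dst = sys.argv[-1]   # where the assistant expects the thumbnail to end up

photo_dir, photo_name = os.path.split(src)
pre_made = os.path.join(photo_dir, 'customThumbs',
                        os.path.splitext(photo_name)[0] + '.jpg2')

if os.path.isfile(pre_made):
    shutil.copy(pre_made, dst)   # shutil.move() skips the copy, see the notes below
    sys.exit(0)
sys.exit(1)                      # let the assistant report a thumbnail error
```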

Notes:
  • Remember, if you used the original script, to rename the folder with the thumbnails to customThumbs (remove the 2 if it is there).
  • Make sure that on Linux and OS X the file is named convert, not convert.py (a file manager might hide the extension on you), and that on Windows it is a convert.exe made with py2exe.
  • The script was originally supposed to use .customThumbs to hide the files and make the uploader ignore them, but I thought that might be confusing for people who can't see hidden files. Anyway, I left everything at customThumbs, but the assistant thinks the folders are JPEGs… I can fix this pretty easily when I get back by changing the extension of the folders too. In the meantime you're going to get a bunch of "couldn't generate thumbnail" errors while it uploads a bunch of empty folders from all the customThumbs folders, but that is just because those are folders and not files. Let the system run its course, then go to your photo share and delete all the customThumbs folders at the end. I don't have time to retype this post, but in retrospect that is probably what happened on my Linux box… Hours spent debugging…
  • If you want the convert script to run even faster you can change the shutil.copy to a shutil.move near the end of the convert script. This will only speed things up if your photos are on the same hard drive as where Synology Assistant stores temporary files (/tmp on Linux, and probably on OS X, but I haven't checked; Windows\temp on Windows). This way the thumbnails don't have to be copied, but they will be deleted by Synology Assistant (I'm guessing you probably don't care :) ).
  • This is all the time I have to explain for now. Best of luck to any who attempt it.
Comments

Fix PhotoStation Beta update

So in case anyone who is waiting on this hasn't noticed, I updated the instructions below, as I forgot and thought that PIL came with the standard Python distribution. Anyway, the hold-up with the second step is that my Windows machine is being rather unsupportive. I had a hard drive corruption issue yesterday and since the reinstall it has not been shy about blue screens. The method I have already works on Unix-like OSes, but I need to convert the script to an exe file for it to work on Windows. So I either need to export it with py2exe or write the same script in C++ and compile it for Windows. I was going to do the latter, but I don't think my system will be stable enough for development work for a while (I'm formatting a 2TB drive right now and the system is pretty touchy about doing other things at the same time). So I went with py2exe and I'm just waiting to do some testing before releasing stage 2. Should be tomorrow at the latest.
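For reference, the py2exe build only needs a tiny setup script, something along these lines (assuming the replacement script is called convert.py):

```python
# setup.py - minimal py2exe build script (assumes the script is convert.py)
from distutils.core import setup
import py2exe  # importing it registers the py2exe command with distutils

setup(console=['convert.py'])

# Build with:
#   python setup.py py2exe
# and remember to ship the whole dist/ folder, not just convert.exe.
```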
Comments

Fix PhotoStation Beta

I'll update this post when I have more. I'm helping someone on Windows get their photos uploaded and this is the first stage in the process. Here are the instructions for this part:
  1. Install Python on whatever OS you like (this stage works on any OS, or at least it should :) );
  2. Install the Python Imaging Library (I forgot you had to do this separately, so if you tried the script and it didn't work this is probably what's wrong): http://www.pythonware.com/products/pil/;
  3. Take the photos that you have and put them in a directory by themselves. It is OK to have subfolders; symbolic and hard links are a bad idea;
  4. Download the script here;
  5. Run this script with the working directory set to the directory you just made. The easiest way to do this is to paste the script into the same directory as the images, then go into a shell (cmd on Windows or the shell on a Unix-like OS), navigate to that directory, and run the script.
Notes:
  • This is going to make thumbnail JPEGs in a folder called customThumbs (because I was testing something and forgot to change it back, the first version I put up will make it customThumbs2; please rename that folder to customThumbs if you used that version :) ) and fill it with folders and .jpg2 files. These are just regular JPEGs; I used .jpg2 so that if you add more images to the folder and rerun the script it won't make thumbnails of the thumbnails. Right now the script will remake all the thumbnails though; it doesn't check to see if they already exist. (There's a rough sketch of what the script does at the end of this post.)
  • These files are provided as is with no warranty of any nature.
  • This doesn't touch videos and only works on JPEGs. If you have RAW photos you'll need to convert them to JPEGs. Personally I keep my RAWs backed up in a different folder and put my mastered JPEGs on the Photo Station; after all, the thumbnails will all be JPEGs regardless.
  • Link to original article
  • If you're reading this, please contact Synology and ask them to give me the details of the photo uploading protocol; it would make this a lot easier. I asked them here; search the page for RagingSloth.
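As mentioned in the notes, here's a rough sketch of what the phase-1 script does: walk the current directory, make a few thumbnail sizes with PIL, and write them out as .jpg2 files into a customThumbs folder. The size names, pixel dimensions, and file naming here are placeholders, not what the real script uses.

```python
import os
from PIL import Image

SIZES = {'XL': 1280, 'L': 800, 'M': 320, 'S': 160}  # placeholder names/pixels

for root, dirs, files in os.walk('.'):
    if 'customThumbs' in root:
        continue  # don't make thumbnails of the thumbnails
    for name in files:
        if not name.lower().endswith(('.jpg', '.jpeg')):
            continue
        src = os.path.join(root, name)
        out_dir = os.path.join(root, 'customThumbs')
        if not os.path.isdir(out_dir):
            os.makedirs(out_dir)
        base = os.path.splitext(name)[0]
        for label, px in SIZES.items():
            img = Image.open(src)
            img.thumbnail((px, px), Image.ANTIALIAS)
            img.save(os.path.join(out_dir, '%s_%s.jpg2' % (base, label)), 'JPEG')
```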
Comments

Still not dead, haven't given up

So this article is taking a while longer than I thought. Summer is a busy time for my job and I was assigned a few secondary positions in the last couple of months. I have made some progress, but the big question is how to make it simple for people to use. It is simple for me right now, but then I'm a computer engineer who happens to have written it :) I asked Synology for information on the upload protocol here: http://blog.synology.com/blog/?p=221 (search for RagingSloth on the page) and never heard anything (I really don't know why they would keep it secret). If you're reading this right now, please contact Synology and ask them to give me the details so I can make a really good photo uploader app. Right now I rely on altering the Synology photo uploader and a two-step process that would take me a while to explain and isn't as efficient as I could make it (though it is massively faster than the Synology one, which does a really bad job of managing memory, pipelining, and making use of multiple cores).
Comments

Not dead and Synology Uploader still coming

I'm not overly surprised that no one from Synology has contacted me about the protocol to upload photos with thumbnails. That being said, I have started work on a simpler photo uploader, but unfortunately my first efforts were a very disappointing experience with Kivy. It seems to have a lot of promise for simple apps, but right now it seems to be mostly useful for games. Many of the widgets are still experimental, including the scroll area and the tree view (which seem rather necessary to me), and at least the version I have registers all mouse events as clicks, for example having the cursor over a button and using the scroll wheel… I did like the current implementation of the file browser though. So anyway, I'm going back to wxPython and hopefully I'll have something ready soon.
Comments

Hockey playoffs are over

So we didn't win. We actually didn't make it to the semi-finals, which was a rather major upset after being in 2nd place most of the year. Such is life, I suppose. Hopefully I'll find some time tomorrow to get some useful stuff up for my NAS article. As I said I would, I contacted Synology and asked about the protocol for uploading thumbnails to try and make a nice, neatly packaged uploader for everyone. They responded that they'd forward my request to the developers, but I haven't heard anything since then. It would be pretty sweet if they hooked me up :) If not, I can still make the process much faster than it is, but it will be more of a hassle (you'll have to make changes to the Synology Assistant files as well as run a separate script to generate the thumbnails, with the assistant then uploading them).
Comments

Update to NAS article. Still not finished

As the title says, the article still isn't ready to solve your problems, but it now lays out how I approached the problem, what I plan on posting in the near future script-wise, and what I want to accomplish in the end.
Comments

First part of Synology NAS Article up

I've got the first part of my Synology NAS article up. It's about what is wrong with Photo Station 5 on DSM 4.0 Synology NAS devices. I hope to have it finished in the next few days. What I've written so far is available here but keep in mind it's still a draft :).
Comments

Synology NAS Photo Speed fix

I kind of feel bad mentioning this without the article up yet, but I spent way more time than I counted on getting to this point :) Basically, I find it cavepersonish to not have access to my entire photo library from wherever I am over the internet, and I don't want to pay a monthly fee to use someone else's cloud space. That leaves the reasonably full-featured Photo Station on my DS212j NAS. Of course, as anyone who owns a Synology NAS knows, it takes forever to get photos up on the damn thing, as the process of generating thumbnails is ridiculous. Even with the Synology Assistant program, large uploads (like the first time you put your library up) take an excruciating amount of time. For two systems, one running two instances of the Synology Assistant (to try and make use of the second core), it took more than a full day to transfer 3455 files. I've tried quite a few things and now I have some scripts that have significantly shortened that timeframe. I'm not totally sure how long in total, but it seems like the same library is now going up in around 3-5 hours (it's still running and I'm not totally sure when I started the process). Also, the thumbnails I'm generating are significantly smaller than those automatically made by the NAS. I'm going to put my scripts up and do a quick article on what I did and why, but right now I need to go to bed.
Comments