Raging Sloth

House is a mess but Beta almost here

So my house purchase is a mess right now. The owners didn't mention a few expensive safety-related defects during the price negotiation, and now they don't want to bother fixing them.

On the Beta front, however, I've switched to using pgmagick (an interface to GraphicsMagick) instead of PIL, and everything is working. The colour-related issues I saw with PIL are gone, though for some reason the thumbnails don't seem as sharp. I think the convert utility is using resizing algorithms that aren't available through pgmagick, since it only wraps the C++ interface and not the C one. It is also possible that the unsharp mask isn't working correctly (or perhaps at all): I run it with the same options the Synology uploader does, and I've even added in regular sharpening, but it doesn't seem to be doing much. The speed advantage is still there; I haven't done any large-scale comparisons yet to confirm it's exactly the same, but CPU utilization is minimal, so I'm still bound by the upload process. In the end the colours are correct this time, and I think I've reached the point of good enough. Anyway, I've spent far too much time on this tonight, so I'll try to clean it up and get it out to my volunteer testers tomorrow.
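For the curious, here's roughly what my pgmagick path looks like. It's a minimal sketch: the unsharp mask numbers below are placeholders rather than the actual Synology options, and the file names are made up:

```python
# Rough sketch of the pgmagick thumbnail path (GraphicsMagick via Magick++).
# The unsharp mask values are placeholders, not the exact Synology options.
from pgmagick import Image

def make_thumbnail(src_path, dst_path, size='800x800'):
    img = Image(src_path)                  # decode the original
    img.quality(90)                        # JPEG quality for the thumbnail
    img.scale(size)                        # resize; GM chooses the filter here
    img.unsharpmask(2.0, 1.0, 0.5, 0.05)   # radius, sigma, amount, threshold
    img.write(dst_path)                    # format inferred from the extension

make_thumbnail('photo.jpg', 'thumb.jpg')
```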

You know what makes it hard to concentrate on programming?

Negotiating the purchase of a house :) I'll try to get the Beta finalized soon though, and hopefully I'll have bought a house too.

Quick Update

Just as a quick update: I'm out of town all this week with only my MacBook. Unfortunately, Python modules are not all that easy to install on OS X… After a lot of tweaking and testing, and some obscure linker and compilation errors, I finally got pgmagick working, so progress continues and I'll hopefully get some stuff worked on tonight. I'd like a setup where normal people can easily install everything they need and run the app on any OS, so I'll probably bundle everything together for the Mac so the various compilation steps won't be needed.

Beta Testing Update

So I noticed that the thumbnails I was getting looked a little strange compared to the DSAssistant's. It turns out the library I picked isn't embedding colour profiles, so I'm going to swap it out for PythonMagick, which is a Python hook to the same library DSAssistant uses. I have been busy all week and will be next week too; if I don't get it done Sunday it might not be until early June.
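If you want to check whether a given library actually embeds the colour profile, here's a quick diagnostic sketch (it uses PIL/Pillow purely to inspect the output file):

```python
# Diagnostic sketch: check a generated thumbnail for an embedded ICC
# colour profile. Pillow is only used to inspect the file here.
from PIL import Image

def has_icc_profile(path):
    img = Image.open(path)
    return img.info.get('icc_profile') is not None

print(has_icc_profile('thumb.jpg'))
```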

Beta is done; hopefully I'll be able to distribute tomorrow

It is late and the Beta is done. I timed one of my test runs and my code came in at 54 seconds over HTTPS and 46 seconds over HTTP, compared to 2 minutes 8 seconds for the DSAssistant uploader. Keep in mind the NAS was not under controlled conditions; there may have been some Time Machine backups going on, so I'll need to reboot the NAS at some point and figure out just where I'm at performance-wise.

Of course there are a few things to consider. For one, my code isn't performing an unsharp mask yet while DSAssistant is, though I have a bit of a beast of a machine and it grinds through the unsharp mask a lot faster than my old one did. The upload process also seems to be pretty slow, and since my code is parallel I'm thinking I can add the unsharp mask almost for free time-wise.

I have a few other things to look into to speed things up. Right now all the thumbnails are generated in separate processes and then returned to the main process to be uploaded from one place, which most likely means all the images are being sent over a socket back to the main process. If I have each worker process upload its own images instead, that might speed things up dramatically (there's a sketch of the idea below). I could also try parallelizing the uploads themselves, but the NAS seems to be the weakest link in this process anyway, so I might not gain any additional speed.

A few other differences worth mentioning: my approach can use HTTPS while DSAssistant cannot (important if you want to upload from outside your home network, since otherwise your password is sent in the clear…), and my approach creates all the images in memory, so no temporary files. Also, from what I can tell DSAssistant uses the file-modified timestamp to determine when a photo was taken, while my approach reads the creation time from the Exif data (which would explain why my photos never seem to be in the proper order when I sort them by date). One last thing: even if I can get the files to upload faster, the NAS already can't keep up with the upload process, so after a large upload it will still be a while before PhotoStation is up to date.

Anyway, I have work tomorrow and really have to get to bed. The week isn't a good time for me (work and all) but I'll try to get the uploader released ASAP.
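To make that concrete, here's a minimal sketch of the worker-upload idea. The two helpers are stand-ins for my real thumbnail and upload code:

```python
# Sketch: each worker generates AND uploads its thumbnails, so the JPEG
# bytes never travel back to the main process over a pipe/socket.
from multiprocessing import Pool

def generate_thumbnails(path):
    # stand-in for the real pgmagick code: size name -> JPEG bytes
    return {'SM': b'...', 'XL': b'...'}

def put_to_nas(path, name, data):
    # stand-in for the real HTTP PUT to the NAS
    pass

def process_one(path):
    # generate and upload inside the worker, so the image bytes never
    # have to cross back to the main process
    for name, data in generate_thumbnails(path).items():
        put_to_nas(path, name, data)
    return path  # only a small status value crosses processes

if __name__ == '__main__':
    paths = ['a.jpg', 'b.jpg', 'c.jpg']
    pool = Pool()  # one worker per CPU core by default
    for done in pool.imap_unordered(process_one, paths):
        print('finished', done)
    pool.close()
    pool.join()
```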
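And here's roughly what the date-handling difference looks like, using Pillow's (semi-private) _getexif helper for JPEGs; tag 36867 is DateTimeOriginal:

```python
# Sketch: prefer the Exif DateTimeOriginal tag over the filesystem
# modified time, which is what DSAssistant appears to sort by.
import os
from PIL import Image

DATETIME_ORIGINAL = 36867  # Exif tag: when the photo was actually taken

def taken_at(path):
    exif = Image.open(path)._getexif() or {}
    when = exif.get(DATETIME_ORIGINAL)   # e.g. '2012:05:20 14:03:11'
    if when:
        return when
    return os.path.getmtime(path)  # fall back to the file-modified time
```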

BTW, if anyone just got a NAS and wants to do a huge upload, let me know in a comment.

So… Close…

Well, time for bed tonight. I got a later start than I thought I would, but I've got all the necessary UI done. It could use some usability work, like progress bars and such, and it currently loads the entire PhotoStation directory structure on startup, which takes way too long (a possible fix is sketched below), but I have all the UI I need to get the work done. I just have to plug my thumbnail generation code into it now, but first, bed. Hopefully I'll have a Beta up tomorrow (though I also have to figure out the best way to package it).
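The likely fix for the startup delay is to load the directory tree lazily, one level at a time, as branches are expanded. Here's a sketch of the idea with wxPython's TreeCtrl; list_remote_dir is a hypothetical stand-in for the PhotoStation query:

```python
# Sketch: populate the tree one directory level at a time, on expand,
# instead of walking the whole PhotoStation structure at startup.
import wx

def list_remote_dir(path):
    # hypothetical stand-in for the real query against the NAS
    return ['Album A', 'Album B']

class LazyTree(wx.Frame):
    def __init__(self):
        super().__init__(None, title='PhotoStation')
        self.tree = wx.TreeCtrl(self)
        root = self.tree.AddRoot('/')
        self.tree.SetItemHasChildren(root, True)  # show the expander
        self.tree.Bind(wx.EVT_TREE_ITEM_EXPANDING, self.on_expand)

    def on_expand(self, event):
        item = event.GetItem()
        if self.tree.GetChildrenCount(item) == 0:  # not loaded yet
            for name in list_remote_dir(self.tree.GetItemText(item)):
                child = self.tree.AppendItem(item, name)
                self.tree.SetItemHasChildren(child, True)

app = wx.App()
LazyTree().Show()
app.MainLoop()
```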

Progress report

So I have pretty much all the code I need for the PhotoStation portion, and I'm about 25% of the way through the UI code that turns it into a usable product. Unfortunately it is now Sunday night and I have work tomorrow, so this likely won't get finished for a few days. I'm doing the UI with wxGlade, which is in general an awesome tool but really needs an undo button… You think you have a control selected, you hit delete, and then a whole window is gone…

Almost done with my third-party PhotoStation uploader

So right now I have a script that will successfully upload a single file, and that saves login information and such in an encrypted file unlocked with a single password (hard to explain, but it took a good deal of time). This way you can use a simple password for the uploader while the more complex password for the NAS is saved encrypted rather than in cleartext on your hard drive. Right now the script just uploads the same file for all of the thumbnails and for the original, and only works on a single file at a specific location. So obviously I need to change it to accept any file or collection of files, and combine it with the thumbnail generation code I already have (adjusting it a bit, since there is no longer any need to save intermediate files to disk; it can all be done in main memory). I also need to use Wireshark to sort out how things change when you upload more than one file, and how to query directories and make new ones. I would probably have a totally working uploader script right now if I'd left the encryption out, but what can I say: I am security minded, and I liked the idea of branching out into a different module for a while.
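The idea, roughly, is to derive a key from the simple master password and use it to encrypt the real NAS credentials. Here's a sketch using the cryptography package (not necessarily the module the actual uploader uses):

```python
# Sketch: keep the strong NAS password encrypted on disk, unlocked by a
# simpler master password. Uses the `cryptography` package; this is not
# necessarily the module the real uploader uses.
import base64, json, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _key(master_password, salt):
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=480000)
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))

def save_credentials(path, master_password, creds):
    salt = os.urandom(16)  # fresh salt each time the file is written
    token = Fernet(_key(master_password, salt)).encrypt(
        json.dumps(creds).encode())
    with open(path, 'wb') as f:
        f.write(salt + token)

def load_credentials(path, master_password):
    raw = open(path, 'rb').read()
    salt, token = raw[:16], raw[16:]
    return json.loads(Fernet(_key(master_password, salt)).decrypt(token))

save_credentials('creds.bin', 'simple', {'user': 'me', 'password': 'strong-NAS-pw'})
print(load_credentials('creds.bin', 'simple'))
```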

A long time coming and almost there...

Thanks to a comment by Louis Somers pointing out a particular PHP file used in uploading thumbnails to a Synology NAS, I'm hot on the trail of making a useful and easy-to-use alternative file uploader. It turns out that there are actually a few PHP files in the mix (it was driving me crazy trying to figure out how that one file alone could do everything, but a little more digging into the file system shows there are at least 5 server scripts involved). I've already got super fast multithreaded Python code to generate the thumbnails, so once I can figure these scripts out it won't be long.

**Update - after a good deal of struggling I've come to realize a few things.
  1. There are two PHP files in use. One gives a list of files that are going to be transferred, and the other is called for each thumbnail that is uploaded.
  2. The file uploads are done with PUTs rather than POSTs to the PHP files (this was really confusing when reading the PHP file and seeing references to files, but nowhere where they are taken from the request…).
  3. Even though my NAS is set up to prefer HTTPS, all of these transfers are done over unencrypted HTTP. That means I spent a lot of time setting up proxies and such so I wouldn't have to mess with my settings to spy on the requests, when the answer in the end was Wireshark.

**Update 2
  1. There are actually three PHP files in use, because while you can use HTTP auth to access files on the NAS, you need to be specifically logged in to PhotoStation to be able to do things (I've got this working).
  2. The second file sets up the transfer for a particular image (I've got this working).
  3. The third file is called multiple times with PUT: once for each thumbnail and once for the original file (I have a test case for a single thumbnail working; see the sketch after this list).
  4. In retrospect the original comment mentioned the use of PUT, but I lost track of that in the confusion over the fact that the system passes values in HTTP headers and reads them through the _SERVER object, when it really should be using parameters. This caused untold problems, because I duplicated the Wireshark output when setting my headers, termination codes included, and the Python requests library went all nuts on me when I included those codes… I spent about 3 hours debugging just to find out I had to delete \r\n from my header values...
  5. I'm done for the night, but I now have all the basic code needed to make a third-party uploader. I expect a working script tomorrow, and a GUI to follow (not sure how long this will take, as I haven't used a Python GUI library in quite a while.)
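To make the three-step flow concrete, here's a minimal sketch with the Python requests library. The endpoint names, fields, and header names below are hypothetical placeholders; the real ones come out of the Wireshark captures:

```python
# Sketch of the three-step PhotoStation upload flow, as I understand it.
# Endpoint, field, and header names are hypothetical placeholders; the
# real ones come from Wireshark captures of the DSAssistant traffic.
import requests

NAS = 'http://my-nas'  # transfers happen over plain HTTP (see update 1)

session = requests.Session()

# Step 1: log in to PhotoStation itself (HTTP auth alone isn't enough)
session.post(NAS + '/photo/login.php',
             data={'username': 'me', 'password': 'secret'})

# Step 2: announce the transfer for one image
session.post(NAS + '/photo/start_transfer.php',
             data={'path': '/album/photo.jpg'})

# Step 3: PUT each thumbnail, then the original. The server reads its
# values from HTTP headers (via PHP's _SERVER), not from parameters --
# and header values must NOT include the trailing \r\n you see on the
# wire, or the requests library will reject them.
for kind, data in [('THUMB_SM', open('thumb_sm.jpg', 'rb').read()),
                   ('ORIGINAL', open('photo.jpg', 'rb').read())]:
    session.put(NAS + '/photo/upload.php',
                headers={'X-File-Name': 'photo.jpg', 'X-Thumb-Type': kind},
                data=data)
```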