Dan Holme recently had an interesting piece of commentary in SharePoint Pro magazine titled It’s Time to Break Up the SharePoint Brand, in which he advocated breaking the product into purpose-based bundles instead of the behemoth platform it is now. I agree with much of what Dan wrote, including his two main points: that it is not a single-purpose platform, and that it operates more like a modern OS.
As I consider those points, though, along with my own experience evangelizing the platform with customers large and small, I would say there is already serious confusion about the licensing. As Microsoft attempts to transform itself into a services-based company, I would actually recommend a different, even more radical licensing move. When buying server-based applications like SharePoint, there should be no need to also individually license the prerequisite dependency technologies, specifically Windows Server and SQL Server. These are needed to run the system, so if a customer has a six-server farm and buys six licenses of SharePoint Server, they also require six licenses for Windows Server as well as at least one SQL Server license. As the number of SharePoint licenses increases, so would the associated Windows and SQL licenses.
While I think this move makes great sense for customers, I am not holding my breath that Microsoft will make it any time soon. I think the biggest obstacle is figuring out how to map revenue back to the other divisions, such as the core server OS group and the database group, but really there should be a set percentage they can agree on. I believe this service- or purpose-based approach would simplify things quite a bit, and it should also apply to the other server applications like Exchange, System Center, etc. Of course there will still be a need for stand-alone Windows Servers for traditional server roles like AD, DNS, and web sites.
As the move to the cloud continues to evolve this may become more of a moot point for many, but it is unlikely that even half of customers will be running all of their applications primarily in the cloud in the next five years.
I have had a few questions around de-duping files within a SharePoint environment recently so I set off to do some research to identify a good solution. Based on past experiences I knew that SharePoint identifies duplicates while performing an index of the content so I expected this would be part of the solution.
Upon starting my journey, I found a couple of threads on various forums where the question had been asked in the past. The first was “Good De-Dup tools for SharePoint”, which had a link to a blog post by Gary Lapointe offering a PowerShell script that can list every library item in a farm. At first glance this seemed neat, but not helpful here.
Next I found a blog post with another handy PowerShell script, titled Finding Duplicate Documents in SharePoint using PowerShell. I found this script interesting, albeit dangerous. It iterates through all of your site collections, sites, and libraries, hashes each document, and compares the hashes to find duplicates. It only identifies duplicate documents within the same location, however. The overhead of running this script is going to be pretty high, and it gets a little risky when you have larger content stores. I would be worried about running this against an environment with hundreds of sites or large numbers of documents.
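The core idea of that script (hash each document, then compare hashes) is simple enough to sketch. Here it is in Python rather than PowerShell, with `find_duplicates` and the `(location, bytes)` input shape being my own illustrative names, not anything from the original script:

```python
import hashlib
from collections import defaultdict

def find_duplicates(documents):
    """Group documents by a hash of their raw content.

    `documents` is an iterable of (location, content_bytes) pairs,
    standing in for the items enumerated from each library.
    Returns only the hash groups with more than one member.
    """
    by_hash = defaultdict(list)
    for location, content in documents:
        digest = hashlib.sha256(content).hexdigest()
        by_hash[digest].append(location)
    return {h: locs for h, locs in by_hash.items() if len(locs) > 1}
```

The expensive part in a real farm is not this grouping logic; it is pulling every document's bytes down to compute the hashes, which is exactly why the approach gets risky on large content stores.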
Next I found an old MSDN thread named Find duplicate files, which had two interesting answers. The first was to query the database (a very bad idea), and the second was a response by Paul Galvin that pointed to the duplicates keyword property and suggested executing a series of alpha wildcard searches with the duplicates keyword. While I have used the duplicates keyword before, I had never thought to use it in this context, so I set out to give it a try.
As I mentioned at the beginning, SharePoint Search does identify duplicate documents. It does this by generating a hash of each document. Unlike the PowerShell option above, the search hash appears to separate out the metadata, so even items with unique locations, metadata, and document names can still be identified as identical documents.
When doing some tests, though, I quickly discovered that the duplicates property requires the full document URL. This means you would have to execute a recursive search: first get a list of items to work with, then iterate through each of those items and execute the duplicates search with a query such as duplicates:"[full document url]".
Conceptually there are two paths forward at this point. The first is to try to obtain a list of all items from SharePoint Search. Unfortunately you cannot get a full list of everything. The best you can do is the loose title search that Paul had suggested, something like title:"a*", which returns all items with a title word beginning with a. You would then have to repeat that for all letters and numbers. One extra challenge is that you will repeatedly process the same items unless you are using FAST Query Language, have access to the starts-with operator, and can do something like title:starts-with("a"). In addition, since we are only looking for documents, it is a very good idea to also add isdocument:true to your query to ensure that only documents are returned. Overall this is a very inefficient process.
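Building that list of seed queries is mechanical; a minimal sketch (in Python for illustration, with `seed_queries` being my own name) of the letter-and-digit enumeration looks like this:

```python
import string

def seed_queries():
    """Build the alpha/numeric wildcard seed queries.

    Each string uses the keyword syntax discussed above;
    isdocument:true restricts results to documents rather
    than plain list items.
    """
    return ['title:"{0}*" isdocument:true'.format(c)
            for c in string.ascii_lowercase + string.digits]
```

Even with all 36 queries issued, the overlap problem remains: a title like "Annual Budget" comes back for both the a and b queries, so results still need de-duplication on the client side.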
An alternative would be to revisit and extend Gary’s original script to execute the duplicates search for each item. The advantage here is that you would guarantee the duplicates search is executed only once per item, which reduces the total processing and the extra output to be parsed. The other change to Gary’s script would be to the log output, since you would only write out information for the items identified as duplicates.
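The shape of that extended loop can be sketched as follows. This is Python pseudocode-style illustration, not Gary's actual script: `run_search` stands in for whatever executes a keyword query against SharePoint Search and returns the matching result URLs, and `log` stands in for the log-file writer.

```python
def log_duplicate_items(item_urls, run_search, log):
    """Run the duplicates: query once per document URL and log only
    the items that actually have duplicates.

    `run_search` and `log` are placeholders for the search execution
    and log-writing pieces of the real script.
    """
    for url in item_urls:
        results = run_search('duplicates:"{0}"'.format(url))
        # A document always matches its own duplicates query, so only
        # report results beyond the item being queried.
        others = [r for r in results if r != url]
        if others:
            log('{0} -> {1}'.format(url, ', '.join(others)))
```

One search per enumerated item is still a lot of queries on a big farm, but it avoids both the full-content hashing of the earlier script and the repeated processing of the wildcard approach.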
Like many people, my overall tech habits have changed quite a bit over the past few years. Where I used to work primarily off of only one or two computers and had good separation between work and personal stuff, the lines have gotten a bit blurry. Microsoft has quietly been amping up the SkyDrive offering and has built it into a really powerful tool. SkyDrive now gives me the flexibility to easily make my content accessible no matter where I am or what device I am using.
At this point I stay pretty busy between my consulting work, SharePoint community involvement, and attending various tech events. I find myself all over the place geographically, but also using a slew of devices: a work laptop, a computer for home, a Surface tablet, and my phone. Using SkyDrive I have easy access to my content on each of those devices. There is now great support for integrating SkyDrive into both the Windows (desktop, phone) and non-Windows (iOS, Android) experiences.
When I find myself at an event I tend to rely on a tablet for taking notes with OneNote (the best MS Office tool ever!), and storing those notebooks in SkyDrive makes the content accessible while also making sure it is properly backed up.
I also find it helpful for my writing tasks, whether for this blog or when I was previously writing the book for Packt. I had easy access to my notes from my regular devices, but also from within the VMs used to support the material. I’ve found the whole sync process pretty rock solid, and I’m now to the point that I really never use the My Documents folder on any of my Windows PCs, since all of my content is stored in either SkyDrive or SharePoint (work and project related content).
Anyone else leveraging the tool? Has it had a positive impact on your work?
A big thank you to everyone who attended my session Where is the Content?!?!? Unlocking the Power of SharePoint Search at SharePoint Saturday Virginia Beach. The slide deck is now available on SlideShare here.
I hope everyone enjoyed the event!