I started this post as a collection of ways to introduce Revit into your practice. As I wrote, I realised that these principles are actually fairly sound for any software implementation. If you are an IT professional you should know all this already, and I apologise – this post is not for you. If you do not have a dedicated IT department, or if you are the architect tasked (taxed) with implementing Revit (or Office 2007 etc.), then read on.
A successful implementation relies on a few common-sense rules and a lot of hard work. First, ensure you know what Revit is. I don't mean that to sound trite, but be clear about what you wish to get from the software. Have expectations – start them fairly low – but know what you want and check that the software can deliver it. You will be sorely disappointed if, after six weeks of work, you cannot produce what you had hoped.
Choose a small, simple project on which to trial Revit. Begin with simple geometry. Choose a project based in one office (assuming you have more than one to begin with); this will make it easier to manage and to address any issues quickly and simply. Feedback will flow, and you can train, teach and mentor as needed.
Have a project champion – this person should be respected, have credibility and gravitas within the office, and should be reasonably senior. They do not have to be actively using Revit, though it would help, but they should be actively championing the cause and rallying the team when the going gets tough.
There will be an element of cultural change required. Your champion should be able to help with this, but the whole team needs to know that they will be working differently. The project must be executed differently: a lot of work will be done up front before anything sensible can be delivered. I would strongly advise against changing the model close to the deadline for printing and submission. Allow a few days to "lock" the model in order to get prints out. If you don't, you might get yourselves into a very bad mess.
Be realistic about what to model and what to draw. Some items should be modelled, some merely drawn. Take doors, for example. Model them: they will be used often, they will need scheduling, they will be altered, and they have common characteristics (jamb, frame and so on). I wouldn't model fine detail such as the door pattern or handles; draw these if you are planning to render. By not modelling every detail you will save time and use your efforts better elsewhere.
For the project you are undertaking, consider the advantages of Revit for different types of work. Residential – clients understand models and visualisations; they are not so keen on floor plans. The rendered model is key here. Hospitality – changes to items within hotel rooms, such as beds or baths, can flow through the whole model and make large changes easier to bear. Communicate these advantages with your team; it will help them see the bigger picture and the advantages of what you are doing.
Revit and BIM (Building Information Modelling) can change the scope of services you offer. These can be extended and you can work more closely with other professions – structural or environmental engineers, for example. Be sensible: don't offer your services until you have discovered what you can and cannot achieve as a team. Don't bite off more than you can chew.
In short, for a successful Revit implementation – begin small, plan, communicate, train and support. These are absolutely crucial for the success of any project. By doing these five things you will engage the practice, engage the client and be able to change your CAD strategy and ultimately deliver better value and work more effectively.
Augmented reality is the overlaying of digital information onto real-world imagery in real time – a mix of computer graphics and live video, if you will. An example you will have seen already is televised sport: the live video is broadcast with information such as scores and other data, and the replay shows the direction and trajectory of the ball. The beauty of augmented reality is that the observer can then interact with the digital part and pull up information relating to the video.
Imagine walking down a UK high street with your phone's camera on – you stop and scan the street for a restaurant, and on your phone an overlay shows you menu items pulled from the restaurant's online menu, reviews from newspapers and so forth.
Science fiction? No, this is available right now from a startup called Layar, with content localised from Yellow Pages, Google, Flickr and Wikipedia (http://www.layar.eu).
Modern smartphones such as iPhones and Google Android devices can determine their location through GPS and an internal compass, they can download data through mobile broadband connections, and they have reasonably powerful graphics-processing capabilities. These features make up the necessary ingredients for mobile augmented reality.
Whilst consumer applications have come first, the possibilities are endless for retail, medicine, education, engineering and construction.
Imagine standing at a construction site – viewing it with the wireframe model overlaid.
What value would that have for the client or in planning submission or public consultation?
This is far better than a traditional 2D CGI or expensive model. As nice as they are, CGIs and models do not place the viewer in the site; they do not have context and relevance. But actually visualising the building or space in its real position, albeit a muddy field, will speak volumes.
Imagine being able to click on a balcony four floors up and get the flat’s information – number of bedrooms, sales cost, floor plan, the environmental specifications etc. As a potential buyer this would be fantastic.
And being able to "view" the shadows of buildings play across the plot and any existing buildings through a time-lapsed year – what would that be worth?
By blending augmented reality with local social media sites – blogs and wikis set up to allow comment on new developments, one could obtain residents’ actual (and future) comments, images and questions on the design resulting in a very interactive and pertinent consultation.
During construction, site visits could be augmented by being able to view the actual versus the planned in 3D whilst on site – simply pointing your iPhone at the building and seeing the actual and the digital overlaid.
Post-construction, facilities management and maintenance could walk round the finished building – being able to “click” on the building components and getting specifications, data, construction methods, or being able to control the elements – HVAC, security, fire, lift logic and so forth. This would be further enhanced by the use of BIM (Building Information Model) CAD tools and software in the design process.
If you are in IT, imagine being able to scan a floor space and obtain network diagrams and floor-port information superimposed over the top, the helpdesk to-do list superimposed over your colleagues' heads as you walk about, or to view a PC or server rack and peruse its environmental information and alert logs – and then to be able to dip in, take control and rectify.
The possibilities for this “new” technology are constrained only by our own visions of use of technology and the hardware with which to support it.
At the moment, companies are nibbling at the edges of the technology, with no commercial products yet on the market, but with all the opportunities out there it is surely only a matter of time before someone takes up the mantle.
If you would like to investigate a little further, Wikitude is an augmented reality application available now for both iPhone and Google Android phones. It overlays Wikipedia information on the image: http://www.wikitude.org/world_browser. It is a little buggy, but these are early days, and I firmly believe that applications such as this will change the way we view and interact with our environment. There is great potential here for truly life-improving applications: the internet is going mobile, and search is going graphical and contextual. It will be a brave new world.
The Federal Trade Commission in the States announced this week that any goods or monies received by a blogger should be declared. I see no problem in that whatsoever, but it has caused quite a stir and a mixture of opinions – most of those against it are bloggers, and I think they doth protest too much!
The internet is built upon openness, transparency and trust. I use it for all manner of things, one of which is product and service research. I use certain sites and forums, some of which include blogs, and infer quality, usefulness and appropriateness from them. In my opinion, if you are being paid (in cash or kind) to review or push a product, it will colour your judgement. If I know that the source I am reading is not independent, I can take that into consideration – not necessarily discount it, but at least I know the blogger's judgement is tainted somewhat.
I have seen all sorts of criticisms about this being "Big Brotherly" and unconstitutional. Whilst I am not an expert on American law, I do have a moral backbone: deceiving your readership is immoral, and taking covert backhanders or "bribes" is immoral. Just rectify the situation by stating that you have received a gift from the company's marketing department. Freedom of the press has nothing to do with being bought out by corporate America. We, the people, can then make our judgements on the unconstitutional nature – or, in the case of being British, just the ethics.
The opinions against, from my reading so far, appear to revolve around disclosure in 140 characters on Twitter; my answer would be to link from your Twitter page to your blog or website and disclose there – pragmatic. Another objection is that it doesn't cover false advertising claims and other media endorsements, such as celebrity endorsements on TV – but I believe it does. And if it doesn't, campaign for that to be included, not against the rule itself; strive for change for the better, not reactionary naysaying. The internet is ours – indeed, the media is now ours too – and we need to make things better moving forwards, not stamp our feet because we don't like the rules.
For reference to the scale of the issue:
"The Word of Mouth Marketing Association, an industry group for social and viral marketing specialists, says $1.35bn was spent on social media marketing in 2007, and that this will reach $3.7bn by 2011" – source: The Guardian, http://bit.ly/dnCrY
Dan Gillmor's blog, referring to the scale of the problem from the blogger's point of view: http://bit.ly/3eIT8f
The concept of a virtual conference is not a new one; its roots are firmly embedded in a history of audio and, later, video conferencing. What sets it apart is the ability to interact with the other participants and to converse and discuss accurately. The basic tenet of screen sharing ensures that all participants are indeed looking at exactly the same file and discussing the exact same piece of information – no more checking which page we are talking about, or describing in detail the area of the graphic or drawing under discussion, and therefore no running the time-consuming risk of talking at cross purposes.
The web meeting can be held in either an ad-hoc or a more structured manner, and from the comfort of one's desktop or laptop; no complicated or expensive equipment is needed, merely some software and a network connection. This is leaps and bounds on from the days of sharing screens over video conferencing – there is virtually no jerkiness or stuttering of the video.
From a work-process perspective, the beauty comes in being able to screen share with a geographically dispersed team and quickly hammer out an issue. Control of the mouse and application can be given to other parties to further facilitate the discussion. There is no need for everyone to have the application installed – it is being "shared" for the duration of the conference. From a green perspective, there is no travel involved – the carbon footprint is very, very low. From a personal perspective, there is no time spent travelling – time that could be better spent in the office or at home. Of course the cost is much, much less as well: typically a license could cost between £6 and £30 per month (though, depending on the vendor, there may be a minimum number to purchase).
Many of these tools are easily adapted to providing seminars or eLearning – training dispersed teams on small subjects. It could be updates to the intranet or new CAD standards – I would suggest no more than a lunchtime's worth of training, otherwise it becomes onerous. The software will let delegates post questions, and the training session can be recorded for offline playback at a later date. Some will let the trainer know who is focused – that is to say, who is actually watching the session and who is reading their email whilst logged in to the session.
Well-known vendors include Adobe with ConnectPro, Microsoft with Live Meeting, Citrix with GoToMeeting and Cisco with WebEx. Lesser known, though equally good and useful, is Beam Your Screen (who are unique in being a UK-based company). Many offer different prices depending on the number of users and whether it is one-to-many or many-to-many. In terms of choosing a vendor, I would suggest trialling a number – maybe one of the well-known vendors and one of the less so, for comparison. All systems offer a try-before-you-buy option or have free versions, which typically allow two or three attendees. Look out for latency – how long the other end has to wait before the screen changes – and for other features, such as recording the session and whether it can be included in the cost.
So in conclusion, you should be doing this already; if you are not then you are missing a trick. You will be saving money, saving time, saving the planet and devoting more effort to creative thinking and providing excellent service to your clients.
Having said all this, however, it cannot replace face-to-face interaction. The key to success in using web meetings is to know their limitations. Whilst web meetings may be quick and efficient, do not expect them to generate group decisions, inspire and engender teamwork, or build relationships with clients.
Software is expensive – often more than the cost of the hardware on which it runs. It needs to be treated as the valuable asset it is and carefully managed. In order to do this you must keep very accurate records of its purchase: from whom it was bought, and where it is installed.
Purchase software assurance; this will let you upgrade legally and keep current. Some software has license management built in; it is worth investigating and using network license managers if you can.
Undertake an audit of what is installed at least annually.
Do not let staff bring in software from home or install it themselves.
Invest in software asset management (SAM) tools and consultancy to monitor installed software against licenses owned – and, more importantly, their appropriate use too. If you have any doubts as to legitimacy, check with the software author or one of its partners.
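The reconciliation a SAM tool performs can be sketched in a few lines. This is an illustrative example, not any vendor's product: the product names and counts are made up, and in practice the "installed" figures would come from a network scan and the "licensed" figures from your purchase records.

```python
# Minimal sketch of a license reconciliation check. The inventories below
# are illustrative; real figures would come from an audit scan and from
# purchase records.
from collections import Counter

installed = Counter({"Revit": 14, "Office": 30, "Photoshop": 5})
licensed = Counter({"Revit": 12, "Office": 30, "Photoshop": 8})

for product in sorted(set(installed) | set(licensed)):
    gap = installed[product] - licensed[product]
    if gap > 0:
        print(f"{product}: {gap} more installs than licenses - action needed")
    elif gap < 0:
        print(f"{product}: {-gap} unused licenses - potential saving")
    else:
        print(f"{product}: compliant")
```

Run against the annual audit, a report like this surfaces both compliance risks (more installs than licenses) and waste (licenses bought but not deployed).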
Server virtualisation involves making two or more “logical” servers on one physical server. That is, we have one “box” with one processor (though more are possible and will make things run better), one set of memory and one set of hard disks — but onto that, we build multiple instances of servers for various uses. Each of these servers is totally independent of the other and exists only in software — they are of course linked by the shared hardware. Each has its own name and network address, and can be rebooted without affecting the others on the same box.
Typically, these servers could be used for any function, but the specification of the physical server must grow in accordance with their use. For example, a box whose virtual servers perform the functions of Active Directory, DNS, a small web server and printing will need fewer processors and less memory and disk than one running a couple of databases behind the corporate intranet and document management functions.
So what’s it for? If space and budgets are tight, it is useful to make the best use of hardware. Practices tend to deploy single applications to single servers. In the first example above, I would require four actual servers, but in the virtual world, only one. The magic lies in the fact that most servers run at around 5-10% capacity most of the time.
Virtualisation makes best use of this by consolidating many servers into one — some typical ratios are 10:1 and 15:1. This could save you a lot of time and money.
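The arithmetic behind those ratios is simple enough to sketch. Assuming, as above, that each existing server averages 5-10% utilisation, and allowing some spare headroom on the host (the 25% figure here is my own illustrative choice, not a vendor recommendation):

```python
# Rough consolidation-ratio estimate: how many lightly loaded servers
# fit on one physical host while leaving some headroom spare.
def consolidation_ratio(avg_utilisation: float, headroom: float = 0.25) -> int:
    """Number of virtual servers per host, keeping `headroom` capacity free."""
    return int((1.0 - headroom) / avg_utilisation)

print(consolidation_ratio(0.05))  # 15 servers per host at 5% average load
print(consolidation_ratio(0.10))  # 7 servers per host at 10% average load
```

The 5% case lands on the 15:1 ratio mentioned above; real sizing would of course also account for memory, disk I/O and peak (not just average) load.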
Virtualisation is also very quick if you need to roll out new servers with new applications or websites. Traditionally, you might have bought a new server, waited for delivery, constructed it and racked it into the cabinet, put Windows on and patched it. With virtualisation, you can simply build a new virtual server on an existing physical server and be up and running in hours rather than days or weeks.
Downsides? It is important not to overload the physical server with too many virtual servers and swamp its resources. The physical server you use also needs some redundant components — hard disks, power supplies, fans.
If you need to power down the box to make a change to one of these items, you will now be affecting many servers and functions rather than just one.
Cost, resource efficiency and speed of provisioning are the key drivers, although the price you might pay is having all your eggs in the same basket.