Social Media for Scientists


NB This post is more than a decade old.

Towards the end of October 2008, I received a flurry of emails asking me to check out new social networking sites for scientists; I had already reviewed the nanoscience community, of course. I suspect that, the academic year having moved into full swing, there were a few scientists hoping to tap into the power of social media tools and the whole web-two-point-ohhhh thing.

This from Brian Krueger:

“I came across your blog during my weekly google search for “science social network.” I thought you might be interested in my website, LabSpaces.net. It’s a social network for the sciences that I’ve had on-line for the last two years and I recently got my University to send out a press release about it. I think you should stop by and check it out. Let me know what you think, I’m always looking for suggestions on how to improve the site.”

LabSpaces has all of the features of a social-networking site with the addition of a daily science newsfeed, lab profiles, a science forum, blogs, and a science protocol database. Apparently, the site provides space for researchers to create their own user profile, add their publication history, upload technical research protocols, blog about science, and share research articles with the community. The site will soon host a free video conferencing service to facilitate long distance collaborations and journal clubs.

New Zealander Peter Matthews, who works in Japan, emailed:

“I am a full-time researcher from NZ, working in Japan, at a museum with many international research visitors. This multilingual environment made me very aware of: (1) the difficulties that non-English based researchers face when using English, and (2) the difficulties that English mono-linguals face when trying to access or publish research in other important research languages, such as Spanish, Chinese, Japanese, French, and so on. Hence my website: The Research Cooperative – http://cooperative.ning.com. Please have a look, join if you want, and please tell any friends and colleagues about this site if you think they might find it useful.”

Pascal Boels, Managing Director of SurgyTec.com emailed with a medical tale:

“Our website is for and by medical professionals. It’s a video-sharing site for surgeons and medical professionals to show off their newly minted skills. It makes it easy for medical professionals to upload videos or slideshows and share those with the community. You can search for videos by specialty, organ/region, tissue, etiology, operation type, or technique. Many surgeons perform original and high-quality techniques in their operating room and equally many surgeons would like to learn from these new and inspiring techniques. Up till now it was very difficult, time consuming and expensive to take a look in each other’s operating rooms and share practical knowledge, tips and tricks. Surgytec.com provides the solution for this problem. We are currently serving over 4000 surgeons from more than 124 countries, sharing over 400 procedures.”

Priyan Weerappuli had long been interested in scientific research but felt that applied research was guarded by private institutions while basic research was held within the confines of colleges and universities by overpriced journals and an oversimplification that occurred whenever research results were translated for more general audiences. His forum/platform will attempt to open this research to a general audience – http://www.theopensourcescienceproject.com

Some correspondents are claiming they’re approaching web 3.0 nirvana:

“ResearchGATE is proud to announce a major update: We greatly improved our search functionality and called it ReFind. The name symbolizes the importance of an efficient and result-driven search functionality within research in general and within our network in particular. ReFind is one of the first search engines based on semantic, “intelligent” correlations. It enables you to find groups, papers, fellow researchers and everything else within and outside of ResearchGATE without having to read through dozens of irrelevant results. Just type a few sentences into ReFind or simply copy and paste your abstract. Our semantic algorithm will then search the leading databases for similar work, providing you with truly relevant results.” [Sounds like my Zemanta/ResearchBlogging.org idea, DB]

One observer pointed out, however, that ResearchGate’s semantic search is maybe not the greatest thing to happen to search in a decade (especially when we have the likes of True Knowledge, Ubiquity, and Zemanta). Indeed, some users have said it is not much of an improvement on conventional search.
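For the curious, the core of a “paste your abstract, find similar work” feature is often little more than bag-of-words similarity. Here is a rough sketch in Python of TF-IDF cosine matching against a toy corpus; this is emphatically not ReFind’s actual algorithm, and the paper “abstracts” below are entirely made up:

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Compute a TF-IDF weight vector for each tokenised document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    return [{t: (c / len(doc)) * math.log(n / df[t])
             for t, c in Counter(doc).items()}
            for doc in docs]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpus of invented paper "abstracts":
corpus = [
    "carbon nanotube field emission display fabrication".split(),
    "protein folding molecular dynamics simulation".split(),
    "nanotube growth by chemical vapour deposition".split(),
]
query = "single walled carbon nanotube synthesis".split()

# Weight the query alongside the corpus so IDF values are shared
vecs = tf_idf_vectors(corpus + [query])
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]
best = max(range(len(scores)), key=scores.__getitem__)  # nanotube papers outrank the protein one
```

Scaled up, with stemming, stop-word removal and a proper index, this is roughly the “related papers” experience; the hard part is ranking quality, which is precisely where sceptical users say such tools fall short of the hype.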

Then there was:

“ScienceStage.com – Science in the 21st century – A wide forum for science – on an interdisciplinary, international and individual level. ScienceStage.com, the only universal online portal for science, advanced teaching and academic research, bridges a major gap in scientific research and learning. ScienceStage.com is a virtual conference room, lecture hall, laboratory, library and meeting venue all in one.”

But, perhaps the best is saved for last. An Oxford graduate student, Richard Price, who has just completed his PhD, has launched Academia.edu, which he says does two things:

“It displays academics around the world in a ‘tree’ format, according to which institution/department they are affiliated with. And, it enables researchers to keep track of the latest developments in their field – the latest people, papers, and talks.”

Price wants to see every academic in the world on his tree and already has Richard Dawkins, Stephen Hawking, Paul Krugman, and Noam Chomsky as members. But that’s the hype; what about its potential? It resembles BioMedExperts because both use a “social” publishing tree, but is that enough to engage scientists?

It will be interesting to see whether any of these sites gain the traction their creators hope for and how things will pan out as the credit crunch bites harder. “There are a bunch of them out there,” Krueger told me, “It’s kind of scary how many came out after Nature and I went on-line in 2006. There’s definitely a lot of competition out there, it seems like a new one appears every month. I wonder how the economy and loss of tech funding is going to affect the larger start-ups.”

Then, there are those perhaps better-known social media sites and networks for scientists, some of which are mentioned in Sciencebase and its sibling sites (tomorrow), in no particular order:

  • Nature Network – uber network from the publishing giant (discontinued December 2013)
  • BioMedExperts – Scientific social networking
  • BioWizard – Blogged up Pubmed search
  • Mendeley – Digital paper repository and sharing
  • Labmeeting (blog) – Ditto
  • YourLabData – socialised LIMS
  • SciLink – Sci-Linkedin
  • Myexperiment.com – mostly workflows.
  • Laboratree.org – similar to ResearchGate. Not particularly social beyond groups and sharing documents with collaborators; email is arguably better, and more secure.
  • scitizen.com – collaborative science news publishing
  • SocialMD – Med-Linkedin
  • Ozmosis – Ditto
  • DNA Network – network of DNA/genetics bloggers
  • ResearchCrossroads – Socialised grant databases
  • MyNetResearch – Socialised LIMS at a price
  • SciVee – YouTube for scientists (see also the Watch with Sciencebase page)
  • Scientist Solutions – science chat
  • Twitter science group and Scientwists list

There are so many, I can barely keep up, but if you have any you think I should add to the list, let me know via the comments box below. Or, more importantly, if you have used any of these systems please leave your thoughts.

Meanwhile, my apologies if you were expecting a lesson in how to use the likes of Twotter, FiendFreed, Ding, Pyuke, or Facebok’s feeble science apps, to help you get on in science socially, but I thought it was about time I did some linking out to the web 3.0 brigade in the world of science, so here they are.

Open Access in Africa

There is much talk about Open Access. There are those in academia who argue the pros extensively in all fields: biology, chemistry, computing. Protagonists are making massive efforts to convert users to this essentially non-commercial form of information and knowledge.

Conversely, there are those in the commercial world who ask who will pay for OA endeavours, and how growth (current recession and credit crunch aside) can continue in a capitalist, democratic society without the opportunity to profit from one’s intellectual property.

Those for and against weigh up both sides of the argument repeatedly. However, they often neglect one aspect of the concept of Open Access: how they might extend it to the developing nations, to what ends, and with what benefits.

Writing in a forthcoming paper in the International Journal of Technology Management, Williams Nwagwu of the Africa Regional Center for Information Science (ARCIS) at the University of Ibadan, Nigeria and Allam Ahmed of the Science and Technology Policy Research (SPRU) at the University of Sussex, UK, suggest that developing countries, particularly those in Sub-Saharan Africa (SSA), are suffering from a scientific information famine. They say that beginning at the local level and networking nationally could help us realise the potential for two-way information traffic.

The expectation that the internet would facilitate scientific information flow does not seem to be realisable, owing to the restrictive subscription fees of the high-quality sources and the persistent inequity in access to and use of the internet and other Information and Communication Technology (ICT) resources.

Nwagwu and Ahmed have assessed the possible impact the Open Access movement may have on addressing this inequity in SSA by removing the restrictions on accessing scientific knowledge. They highlight the opportunities and challenges but also demonstrate that there are often mismatches between what the “donor” countries and organisations might reasonably offer and what the SSA countries can actually implement. Moreover, they explain the slow uptake of Open Access in SSA as being related to the perception of the African scientists towards the movement and a lack of concern by policymakers.

The researchers suggest that the creation of a digital democracy could prevent the widening information gap between the developed and the developing world. Without the free flow of information between nations, particularly in and out of Africa and other developing regions, there may be no true global economy.

“Whatever might emerge as a global economy will be skewed in favour of the information-haves, leaving behind the rich resources of Africa and other regions, which are often regarded as information have-nots,” the researchers say. In other words, it is not only SSA that stands to lose from the lack of information channels between SSA and the developed world, but the developed world too.

“The current pattern of the globalisation process is leaving something very crucial behind, namely the multifaceted intellectual ‘wealth’ and ‘natural resources’ of Africa,” they add. “The beauty of a truly globalised world would lie in the diversity of the content contributed by all countries.”

From this perspective, they say, the free flow of scientific articles must be pursued by developing countries, particularly SSA, with vigour. “African countries should as a matter of priority adopt collaborative strategies with agencies and institutions in the developed countries where research infrastructures are better developed, and where the quest for access to scientific publication is on the increase.”

They suggest that efforts could begin locally having found that even within single institutions in most African countries, access to scientific articles is very scant. “Local institutions should initiate local literature control services with the sole aim of making the content available to scientists,” they suggest.

Proper networking of institutions across a country could then ease access to scientific publications. One such initiative in Nigeria has started under the National University Commission’s NUNet Project, but wider support from governments is necessary to build the infrastructure. Research-oriented institutions could use their funds to grant free access to their readers, especially given that many already pay large subscription fees on their readers’ behalf.

Meanwhile, can music bring open relief to Africa?

Williams E. Nwagwu, Allam Ahmed (2009). Building open access in Africa. International Journal of Technology Management, 45(1/2), 82-101. I put in a request with the publishers for this paper to be made freely available, and it now is. You can download the PDF here.

Sex and Social Networking

Ultimately, the only truly safe sex is that practised alone or not practised at all, oh, and perhaps cybersex. However, that said, even these have issues associated with eyesight compromise (allegedly), repetitive strain injury (RSI) and even electrocution in extreme cases of online interactions (you could spill your Mountain Dew on your laptop, after all). And, of course, there are popups, Trojans, packet sniffers, viruses and worms to consider…

No matter how realistic the graphics become in Second Life or how good the third-party applications in Facebook, however, unless you indulge in direct human-to-human contact in the offline world, you are not going to catch a sexually transmitted disease (STD). Real-world social networking is, of course, a very real risk factor for STD transmission, according to a new research report in the International Journal of Functional Informatics and Personalised Medicine. This could be especially so given the concept of six degrees of separation, through which individuals are networked by ever-shorter person-to-person-to-person bonds.

According to Courtney Corley and Armin Mikler of the Computational Epidemiology Research Laboratory, at the University of North Texas, computer scientist Diane Cook of Washington State University, in Pullman, and biostatistician Karan Singh of the University of North Texas Health Science Center, in Fort Worth, sexually transmitted diseases and infections are, by definition, transferred among intimate social networks.

They point out that although the way in which various social settings are formed varies considerably between different groups in different places, crucial to the emergence of sexual relationships is obviously a high level of intimacy. They explain that for this reason, modelling the spread of STDs so that medical workers and researchers can better understand, treat and prevent them must be underpinned by social network simulation.

Sexually transmitted diseases and infections are a significant and increasing threat in both developed and developing countries around the world, causing varying degrees of mortality and morbidity in all populations.

Other research has revealed that approximately one in four teens in the United States will contract a sexually transmitted disease (STD) because they fail to use condoms consistently and routinely. The reasons, it seems, are well known: partner disapproval and concerns about reduced sexual pleasure.

As such, professionals within the public health industry must be responsible for properly and effectively funding resources, based on predictive models so that STDs can be tamed. If they are not, Corley and colleagues suggest, preventable and curable STDs will ultimately become endemic within the general population.

The team has now developed the Dynamic Social Network of Intimate Contacts (DynSNIC). This program is a simulator that embodies the intimate dynamic and evolving social networks related to the transmission of STDs. They suggest that health professionals will be able to use DynSNIC to develop public health policies and strategies for limiting the spread of STDs, through educational and awareness campaigns.
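DynSNIC itself is not described in implementable detail here, but the basic idea of simulating transmission over a contact network can be sketched with a toy susceptible-infected (SI) model on a random graph. Every parameter below (network size, edge count, transmission probability) is an arbitrary illustration, not taken from Corley’s paper:

```python
import random

def simulate_si(n_nodes, edges, p_transmit, seed_node, steps, rng):
    """Toy SI process on a static contact network: each step, every
    infected node transmits along each of its edges with probability
    p_transmit. Returns the final set of infected nodes."""
    neighbours = {i: set() for i in range(n_nodes)}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    infected = {seed_node}
    for _ in range(steps):
        newly = set()
        for node in infected:
            for nb in neighbours[node]:
                if nb not in infected and rng.random() < p_transmit:
                    newly.add(nb)
        infected |= newly
    return infected

rng = random.Random(42)  # fixed seed for reproducibility
n = 50
# A small random "contact" network (self-loops removed)
edges = [(rng.randrange(n), rng.randrange(n)) for _ in range(80)]
edges = [(a, b) for a, b in edges if a != b]
outbreak = simulate_si(n, edges, p_transmit=0.3,
                       seed_node=edges[0][0], steps=10, rng=rng)
```

A real epidemic simulator would add recovery, demographic structure and, crucially, an edge set that changes over time, which is exactly what a dynamic intimate-contact network model layers on top of this skeleton.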

As a footnote to this research, it occurred to me that researchers must spend an awful lot of time contriving acronyms and abbreviations for their research projects. Take Atlas, one of the experimental setups at the Large Hadron Collider at CERN in Geneva, Switzerland. Atlas stands for “A Toroidal LHC ApparatuS”. So they used an abbreviation within their acronym, as well as a noise word – “A” – and the last letter of one of the terms. Ludicrous.

But, Atlas is not nearly as silly as the DynSNIC acronym used in Corley’s paper, I’m afraid. Dynamic Social Network of Intimate Contacts, indeed! I thought the whole idea of abbreviating a long research project title was to make it easier to remember and say out loud. DynSNIC is hardly memorable (is it a y or an i? snic, snick, sink, or what?). Students will forever struggle with such contrivances. They could’ve just as easily used something like Sexually Transmitted Infections Contact Social Intimate Networks – STICSIN. This would be a double-edged sword that would appeal both to the religious right and to the scabrous-minded, depending on where you put the break (after Contact or after Social).

Courtney D. Corley, Armin R. Mikler, Diane J. Cook, Karan P. Singh (2008). Dynamic intimate contact social networks and epidemic interventions. International Journal of Functional Informatics and Personalised Medicine, 1(2), 171-188

Compare and Compare Alike

Back in June 2001, I reviewed an intriguing site that allows you to compare “stuff”. At the time, the review focused on how the site could be used to find out in how many research papers archived by PubMed two words or phrases coincided. I spent hours entering various terms hoping to turn up some revelatory insights about the nature of biomedical research, but to no avail.

I assumed the site would have become a WWW cobweb by now, but no! compare-stuff is alive and kicking and has just been relaunched with a much funkier interface and a whole new attitude. And as of fairly recently, the site now has a great blog associated with it in which site creator Bob compares some bizarre stuff such as pollution levels versus torture and human rights abuses in various capital cities. Check out the correlation that emerges when these various parameters are locked on to the current Olympic city. It makes for very interesting reading.

Since the dawn of the search engine age people have been playing around with the page total data they return. Comparing the totals for “Company X sucks” and “Company Y sucks”, for example, is an obvious thing to try. Two surviving examples of websites which make this easy for you are SpellWeb and Google Fight, in case you missed them the first time around.

compare-stuff took this a stage further with a highly effective enhancement: normalisation. This means that a comparison of “Goliath Inc” with “David and Associates” is not biased in favour of David or Goliath.

Compare-stuff, with its new, cleaner interface, now takes this normalisation factor to the logical extreme and allows you to carry out a trend analysis and so follow the relative importance of any word or phrase, for example “washed my hair”, with respect to a series of related words or phrases such as “Monday”, “Tuesday”, “Wednesday”…“Sunday”. The site retrieves all the search totals (via Yahoo’s web services), does the calculations and presents you with a pretty graph of the result (the example below also includes “washed my car” for comparison).

Both peak at the weekend, but hair washing’s peak is broader and includes Friday, as you might expect. It’s a bit like doing some expensive market research for free, and the cool thing is that you can follow the trends of things that might be difficult to ask in an official survey.
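The normalisation itself is simple arithmetic: divide the hit count for “phrase AND category” by the hit count for the category alone, so that popular categories don’t dominate. A minimal sketch, with invented hit counts standing in for the live search totals:

```python
def normalised_trend(pair_counts, base_counts):
    """Divide the count of pages matching 'phrase AND category' by the
    count matching the category alone, for each category."""
    return {cat: pair_counts[cat] / base_counts[cat] for cat in base_counts}

# Hypothetical hit counts a search API might return (made-up numbers):
base = {"Monday": 9_000_000, "Tuesday": 7_000_000,
        "Saturday": 12_000_000, "Sunday": 11_000_000}
hair = {"Monday": 900, "Tuesday": 700, "Saturday": 3_000, "Sunday": 2_600}

trend = normalised_trend(hair, base)
peak = max(trend, key=trend.get)  # the weekend peak survives normalisation
```

Without the division, a phrase would appear to “peak” on whichever day is most written about overall, regardless of any real association with that day.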

You can analyse trends on other timescales (months, years, time of day, public holidays), or across selected non-time concepts (countries, cities, actors). Here are a few more examples:

  • Which day of the week do people tidy their desk/garage?
  • At what age are men most likely to get promoted/fired?
  • Which popular holiday island is best for yoga or line dancing?
  • Which 2008 US presidential candidate is most confident?
  • Which day is best for Science and Nature?

As you can see, compare-stuff provides some fascinating sociological insights into how the world works. It’s not perfect though. Its creator, Bob MacCallum, is at pains to point out that it can easily produce unexpected results. The algorithm doesn’t know when words have multiple meanings or when their meaning depends on context. A trivial example would be comparing the trends of “ruby” and “diamond” vs. day of the week.

The result shows a big peak for “ruby” on “Tuesday”, not because people like to wear, buy or write about rubies on Tuesday, but because of the numerous references to the song “Ruby Tuesday” of course.

However, since accurate computer algorithms for natural language processing are still a long way off, MacCallum feels that a crude approach like this is better than nothing, particularly when used with caution. Help is at hand though, the pink and purple links below the plot take you to the web search results, where you can check that your search terms are found in the desired context; in the top 10 or 20 hits that is. On the whole it does seem to work, and promises to be an interesting, fast and cheap preliminary research tool for a wide range of interest areas.

With summer well under way, Independence Day well past, and thoughts of Thanksgiving and Christmas coming to the fore already (at least in US shops), I did a comparison on the site of E. coli versus salmonella for various US holidays. You can view the results live here, as well as tweaking the parameters to compare your own terms.

Originally posted June 4, 2007, updated August 19, 2008

Finding Experts

One of the main tasks in my day-to-day work as a science writer is tracking down experts. The web makes this much easier than it ever was for journalists in decades past. There are times when a contact in a highly specialist area does not surface quickly, but there are also times when I know for a fact that I’ve already been in touch with an expert in a particular area but, for whatever reason, cannot bring their name to mind. Google Desktop Search, with its ability to trawl my Thunderbird email archives for any given keyword, is a boon in finally “remembering” the contact.

However, finding just a handful of contacts from web searches, email archives and the good-old-fashioned address book pales into insignificance when compared to the kind of industrial data mining companies and organisations require of their “knowledge workers”.

According to Sharman Lichtenstein of the School of Information Systems at Deakin University, in Burwood, Australia, and Sara Tedmori and Thomas Jackson of Loughborough University, Leicestershire, UK: “In today’s highly competitive globalised business environment, knowledge workers frequently lack sufficient expertise to perform their work effectively.” The same concern might be applied to those working in any organisation handling vast amounts of data. “Corporate trends such as regular restructures, retirement of the baby boomer generation and high employee mobility have contributed to the displacement and obfuscation of internal expertise,” the researchers explain.

The team explains how knowledge is increasingly distributed across firms and that when staff need to seek out additional expertise they often seek an internal expert to acquire the missing expertise. Indeed, previous studies have shown that employees prefer to ask other people for advice rather than searching documents or databases. Finding an expert quickly can boost company performance and as such locating experts has become a part of the formal Knowledge Management strategy of many organisations.

Such strategies do not necessarily help knowledge workers who themselves lack the search expertise and time required to find the right person for the job, however. So, Jackson developed an initial expertise locator system, later further developed with Tedmori, to address this issue in an automated way. The researchers discuss an automated key-phrase search system that can identify experts from the archives of the organisation’s email system.

Immediately on hearing such an intention, the civil liberties radar pings! There are sociological and ethical issues associated with such easy access and searchability of an email system, surely? More than that, an expert system for finding experts could become wide open to misuse – finding the wrong expert – and abuse – employees and employers unearthing the peculiar personal interests of colleagues for instance.

The first generation of systems designed to find experts used helpdesks as the formal sources of knowledge, and consisted simply of knowledge directories and expert databases. Microsoft’s SPUD project, Hewlett-Packard’s CONNEX KM system, and the SAGE expert finder are key examples of this genre, the researchers point out. Such systems are akin to Yellow Pages and are essentially electronic directories of experts that must be maintained on a continual basis. They allow anyone with access to tap into expertise, but unless the experts keep their profiles up to date, they can quickly lose relevancy and accuracy.

Overall, when large numbers of employees are registered and profiles are inaccurate, credibility is rapidly lost in such systems which are increasingly ignored by knowledge seekers.

Second generation expertise locators were based on organisations offering their staff a personal web space within which they could advertise their expertise internally or externally. Convenient for those searching but again relying on the experts in question to keep their web pages up to date. Moreover, simple keyword matching when searching for an expert would not necessarily find the best expert because the search results would depend on how well the expert had set up their web pages and whether and how well they had included keywords in those pages. In addition, keyword searching can produce lots of hits that must then be scanned manually, which takes time.

The third generation of expert searching relies on secondary sources, such as tracking the browsing patterns and activities of employees to identify individual experts. Such an approach raises massive privacy concerns, even for companies with a strict web access policy. Activity on forums, bulletin boards, and social networks falls into this third generation approach.

The fourth generation approach mashes the first three and perhaps adds natural language searching again with various efficiency and privacy concerns. Again, it does not necessarily find the best expert, but often just the person whose data, profile, and web pages are optimised (deliberately or by chance) to reach the top slot in the search results.

An approach based on key-phrase identification in e-mail messages could, however, address all requirements but throws up a new wave of privacy concerns, which Lichtenstein and colleagues discuss.

There are several features of email that make it popular and valuable for organisational knowledge work, and relevant to finding an expert:

  • It attracts worker attention
  • It is integrated with everyday work
  • It provides a context for sense-making about ideas, projects and other types of business knowledge
  • It enables the referencing of work objects (such as digital documents), and provides a history via quoted messages
  • It has high levels of personalised messages which are appealing, meaningful and easily understood
  • It encourages commitment and accountability by automatically documenting exchanges
  • It can be archived, so providing valuable individual, collective and organisational memories that may be mined
  • It facilitates the resolution of multiple conflicting perspectives which can stimulate an idea for a new or improved process, product or service.

All these factors mean that email could become a very useful tool for finding experts. Already many people use their personal email archives to seek out knowledge and experts, but widen that to the organisational level and the possibilities become enormous.

The researchers have developed an Email Knowledge Extraction (EKE) system that utilises a Natural Language ToolKit (NLTK) employed to build a key-phrase extraction “engine”. The system is applied in two stages, the first of which “teaches” the system how to tag the speech parts of an email, so that headers and other extraneous information become non-searched “stop words” within the email repository. The second stage extracts key-phrases from the searchable sections of an email once it is sent. This extraction process is transparent to the sender and takes just milliseconds to operate on each email. A final stage involves the sender being asked to rank each identified key-phrase to indicate their level of expertise in that key-phrase area. A database of experts and their areas of expertise is gradually developed by this approach. Later, employees searching for experts can simply consult this database.
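The paper does not reproduce EKE’s code, but the pipeline as described — strip header lines, drop stop words, then rank candidate phrases by frequency — can be caricatured in a few lines of Python. This is a crude stand-in, not the NLTK-based engine itself, and the stop-word list and sample email are invented:

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "for", "on",
              "we", "i", "you", "is", "are", "this", "that", "with"}
HEADER_PREFIXES = ("from:", "to:", "subject:", "date:")

def extract_key_phrases(email_text, top_n=3):
    """Strip header lines, drop stop words, and rank the remaining
    adjacent word pairs by frequency across the message body."""
    body = [line for line in email_text.lower().splitlines()
            if not line.strip().startswith(HEADER_PREFIXES)]
    words = [w for w in re.findall(r"[a-z']+", " ".join(body))
             if w not in STOP_WORDS]
    bigrams = Counter(zip(words, words[1:]))
    return [" ".join(pair) for pair, _ in bigrams.most_common(top_n)]

email = """From: alice@example.org
Subject: gel electrophoresis protocol
The gel electrophoresis run worked. I attach the gel electrophoresis
protocol we discussed, with notes on buffer preparation."""

phrases = extract_key_phrases(email)  # "gel electrophoresis" ranks first
```

In the real system the sender is then asked to rate their own expertise in each extracted phrase, and those ratings, not raw frequencies, are what populate the expert database.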

The EKE system has been implemented at Loughborough University and at AstraZeneca in trials and found to be able to capture employee knowledge of their own expertise and to allow knowledge workers to correctly identify suitable experts given specific requirements. The researchers, however, highlight the social and ethical issues that arise with the use of such a system:

  • Employee justice and rights and how these might conflict with employer rights.
  • Privacy and monitoring, as there is more than a small element of “Big Brother” inherent in such a system
  • Motivational issues for sharing knowledge, as not all those with expertise may wish to be data mined in this way, having enough work of their own to fill their 9-to-5 for instance
  • Relationships, as not everyone will be able to work well together regardless of expertise
  • Ethical implications of expert or non-expert classification, as the system could ultimately flag as experts those employees with little or no expertise.
  • Deliberate misclassification of experts, as all systems are open to abuse and malpractice.
  • Expert database disclosure, as such a comprehensive database if accessed illicitly by an organisation’s rivals could wreak havoc in terms of stealing competitive advantage, headhunting or other related activities.

Lichtenstein, S., Tedmori, S., Jackson, T. (2008). Socio-ethical issues for expertise location from electronic mail. International Journal of Knowledge and Learning, 4(1), 58. DOI: 10.1504/IJKL.2008.019737

Fair Use Rights

Intellectual property, copyright, creative commons, copyleft, open access… These are all terms high on the agenda in science and elsewhere these days. For example, publicly funded scientists the world over are calling for research results to be available free to them and their peers, for the public good and for the good of scientific advancement itself. Librarians likewise are interested in the fullest dissemination and sharing of knowledge and information, while user-creators and the new breed of citizen journalists spawned by the Internet Age are also more liberal in their outlook regarding the proprietary nature of creative works.

On the other hand, traditional publishers, database disseminators, and the commercial creative industry consider the investment they put into the creation and distribution of works as a basis for the right to charge readers and users and for profit-making. Meanwhile, adventurous organisations that are not necessarily beholden to shareholders, to other commercial concerns, and to learned society memberships, are experimenting with alternative business models with varying degrees of success.

One aspect of copyright that arises repeatedly in any discussion is what is considered fair use and what kind of usage warrants a cease & desist order from the owner of copyright in their works.

Now, Warren Chik, an Assistant Professor of Law at Singapore Management University, is calling for a reinvention of the general and flexible fair use doctrine through the simple powerful elevation of its legal status from a legal exception to that of a legal right.

Writing in the International Journal of Private Law, 2008, 1, 157-210, Chik explains that it is the relatively recent emergence of information technology and its impact on the duplication and dissemination of creative works – whether it is a photograph, music file, digitised book, or other creative work – that has led to a strengthening of the copyright regime to the extent that it has introduced “a state of disequilibrium into the delicate equation of balance that underlies the international copyright regime”.

Copyright holders have lobbied for their interests and sought legal extensions to the protection of “their” creative works. But the law in several countries has reacted in a knee-jerk fashion that is not necessarily to the benefit of the actual creator of the copyrighted work or of the user. Chik summarises the impact this has had quite succinctly:

The speedy, overzealous and untested manner in which the legal response has taken has resulted in overcompensation such that the interests of individuals and society have been compromised to an unacceptable degree.

For some forms of creative works, such as music and videos, a protectionist climate has emerged that has led to double protection in law, in the form of digital rights management (DRM) systems and anti-circumvention laws that allow copyright owners to prosecute those who attempt to get around such restrictive devices. This, Chik affirms, has “inadvertently caused the displacement of the important fair use exemptions that many consider the last bastion for the protection of civil rights to works.”

Chik points out that this tightening of the laws runs counter to the increasing penetration of electronic forms of storage and communication, the borderless nature of the Internet, and the invention of enabling technologies such as the so-called “Web 2.0”. This in turn is apparently leading to a general social shift towards more open, collaborative creativity, whether in the arts or the sciences, and what he describes as “the rise of a new global consciousness of sharing and participation across national, physical and jurisdictional borders.”

Whether that view is strictly true or not is another matter. At what point will those who share a few snapshots among strangers, or run a small-scale collaboration between laboratories, realise the need for a more robust approach to their images and data? For example, if you are sharing a few dozen photos you may see no point in protecting them beyond a Creative Commons licence, but what happens when you realise you have tens of thousands of saleable photos in storage? Similarly, a nifty chemical reagent that saves a few minutes each week in a small laboratory could take on global significance if it turns out to shorten a synthesis in the pharmaceutical industry. Who would not wish to receive full credit and monetary compensation for their creative works in such cases?

Chik proposes not to destroy or even radically overhaul the present copyright regime; instead, he endorses a no less significant reinvention of the general and flexible fair use doctrine through the simple but powerful elevation of its legal status from a legal exception to a legal right, with all the benefits that a legal right entails. This change, he suggests, could be widely and rapidly adopted.

Currently, he says, fair use exists formally only as a defence to an action of copyright infringement. But, DRM and other copyright protection threaten this defence and skew the playing field once more in favour of copyright holders. “Fair use should exist in the law as something that one should be able to assert and be protected from being sued for doing,” Chik says.

Such a change would render copyright law more accurately reflective of an electronically interconnected global society and would also acknowledge the importance and benefits of enabling technologies and their role in human integration, progress and development.

Chik, W. (2008). Better a sword than a shield: the case for statutory fair use right in place of a defence. International Journal of Private Law, 1(1/2), 157. DOI: 10.1504/IJPL.2008.019438

Identifying Digital Gems

Sciencebase readers will likely be aware that when I cite a research paper, I usually use the DOI system, the Digital Object Identifier. This acts like a redirect service: it takes the unique identifier assigned to each research paper by its publisher and passes it to a server that works out where the actual paper is on the web.

The DOI system has several handlers, and indeed that’s one of its strengths: it is distributed. So, as long as you have the DOI, you can use any of the handlers (dx.doi.org, http://hdl.handle.net, http://hdl.nature.com/, etc.) to look up a paper of interest; for example, http://dx.doi.org/10.1504/IJGENVI.2008.018637 will take you to a paper on water supplies on which I reported recently.

The DOI is a kind of hard-wired redirect for the actual URL of the object itself, which at the moment will usually be a research paper. It could, however, be any other digital object: an astronomical photograph, a chemical structure, or a genome sequence, for instance. In fact, thinking about it, a DOI could be used as a shorthand, a barcode if you like, for whole genomes, protein libraries, databases, and molecular depositions.
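As a rough sketch of how that resolution works: a DOI is just a prefix (always beginning “10.”) and a publisher-assigned suffix, and any handler service can turn it into a web address by prepending its own resolver base. The pattern check and the two resolver prefixes below are illustrative choices for this sketch, not an official specification.

```python
import re

# Illustrative resolver prefixes mentioned in the post; any
# registered handler should resolve the same DOI identically.
RESOLVERS = [
    "http://dx.doi.org/",
    "http://hdl.handle.net/",
]

# Loose sanity check: DOIs start "10.", then a registrant code,
# a slash, and a publisher-chosen suffix with no whitespace.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")


def resolver_urls(doi: str) -> list[str]:
    """Return the handler URLs that should redirect to the object."""
    if not DOI_PATTERN.match(doi):
        raise ValueError(f"Not a well-formed DOI: {doi!r}")
    return [prefix + doi for prefix in RESOLVERS]


urls = resolver_urls("10.1504/IJGENVI.2008.018637")
print(urls[0])  # http://dx.doi.org/10.1504/IJGENVI.2008.018637
```

Requesting either of those URLs returns an HTTP redirect to wherever the publisher currently hosts the paper, which is the whole point: the DOI stays fixed while the destination URL is free to change.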

I’m not entirely sure why we also need the Library of Congress permalinks, the National Institutes of Health simplified web links, as well as the likes of PURL and all those URL-shortening systems such as TinyURL and SnipURL. A unified approach that works at the point of origin, with the creator of the digital object, an idea I’ve suggested previously under the coined name PaperID, would seem so much more straightforward.

One critical aspect of the DOI is that it ties to hard, unchanging, non-dynamic links (URLs) for any given paper, or other object. Over on the CrossTech blog, Tony Hammond raises an interesting point regarding one important difference between hard and soft links and the rank that material at the end of such a link will receive in the search engines. His post discusses DOI and related systems, such as PURL (the Persistent URL system), which also uses an intermediate resolution system to find a specific object at the end of a URL. There are other systems emerging such as OpenURL and LCCN permalinks, which seek to do something similar.

However, while Google still predominates online search, hard links will be the only way for a specific digital object to be given any weight in its results page. Dynamic or soft links are discounted, or not counted at all, and so never rank in the way that material at the end of a hard link will.

Perhaps this doesn’t matter, as those scouring the literature will have their own databases to trawl that require their own ranking algorithms based on keywords chosen. But, I worry about serendipity. What of the student taking a random walk on the web for recreation or perhaps in the hope of finding an inspirational gem? If that gem is, to mix a metaphor, a moving target behind a soft link, then it is unlikely to rank in the SERPs and may never be seen.

Perhaps I’m being naive; maybe students never surf the web in this way, looking for research papers of interest. However, with multidisciplinarity increasingly necessary across fields, it seems unlikely that gems will be unearthed through conventional literature searching of a parochial database that covers only a limited range of journals and other resources.

Make Music, Boost Brain

I’ve played guitar – classical, acoustic, electric – for over three decades, ever since I pilfered my sister’s nylon-string at the age of 12, although even before that I’d had a couple of those mini toy guitars with actual strings at various points in my childhood. Even though I never took a single guitar lesson, I eventually learned to follow music and guitar tablature, but I was only really any good at keeping up with a score if I’d already heard someone else play the piece; it don’t mean a thing if it ain’t got that swing, after all.

Meanwhile, I took up singing in a choral group (called bigMouth) and have felt compelled to become ever so slightly more adept at reading music in a slightly more disciplined environment than jamming on guitars with friends. Big Mouth formed in the autumn of 2007 and we meet weekly for singing practice and have now done a few small “local” gigs. We even put together a last-minute audition video tape for the BBC’s Last Choir Standing, but didn’t make it through to the heats, (un)fortunately.

Anyway, that’s probably enough detail. The point I wanted to make is that until I joined Big Mouth and began making music regularly with a group, I’d always felt like I was quite useless at remembering people’s names. Like many people I’d always had to make a real conscious effort to keep new names in mind. However, in the last few months, with no deliberate action on my part, I’ve noticed that I seem to remember stuff like fleeting introductions, the names of people mentioned in conversations, or press releases and other such transient data much better than before.

I’m curious as to whether it’s the ever-so-slightly more formal discipline of group music practice that’s done something to the wiring in my brain, or whether it’s simply to do with expanding one’s social group in a sudden burst like this. I’ve heard of people claiming increased brain power after taking music lessons. It’s probably a combination of both, and my suspicions about the power of music for boosting the brain are bolstered somewhat by a recent TED talk from Tod Machover and Dan Ellsey on the power of music.

I also wonder whether there’s some connection with the Earworms concept for language learning, which I reviewed back in 2006.

A Wrench for Social Engineering

Social engineering attacks, what used to be known as confidence, or con, tricks, can only be defeated by potential victims taking a sceptical attitude to unsolicited approaches and requests for privileged information and resources. That is the message from European researchers.

Most of us have probably received dozens of phishing messages and emails from scammers on the African continent seeking to relieve us of our hard-earned cash. Apparently, these confidence tricksters are so persuasive that they repeatedly succeed in hustling funds even from those among us with a normally cynical outlook and an awareness of the ways of the world.

On the increase too are cowboy construction outfits and hoax double-glazing sales staff who wrest life savings from senior citizens, and so-called boiler-room fraudsters who present get-rich-quick schemes so persuasively that thousands of unwitting individuals lose money totalling millions of dollars each year.

Con artists and hustlers have always preyed on greed and ignorance. As the saying goes, a fool and their money are easily parted. However, the new generation of social engineers are not necessarily plundering bank accounts with promises of riches untold, but are finding ways to infiltrate sensitive databases, accounts, and other resources using time-honoured tricks and a few new sleights of hand.

Now, Jose Sarriegi of Tecnun (University of Navarra) in San Sebastian, Spain, and Jose Gonzalez, currently in the Department of Security, Quality and Organizations at the University of Agder, Norway, have examined the concept of social engineering and stripped it down to its most abstract level (International Journal of System of Systems Engineering, 2008, 1, 111-127). Their research could lead to a shift in attitude that will arm even the least sceptical person with the social tools necessary to spot an attempt at social engineering and stave off the attack with diligence.

Fundamentally, the researchers explain, social engineering is an attempt to exploit a victim, whether an individual or an organization, in order to steal an asset (money, data, or another resource), to make some resource unavailable to legitimate users in a denial-of-service attack, or, in the extreme, to instigate some terrorist or equally destructive activity.

Of course, a social engineering attack may not amount to a single intrusion; it could involve layer upon layer of deception at different places and on different people and resources. The creation of a sophisticated back-story, access to less sensitive resources, and targeting of the ultimate goal is more likely to be a dynamic process. This, the researchers suggest, means that looking for “heaps of symptoms”, as one might when attempting to detect someone breaking into a computer system, is no longer appropriate; a dynamic response to a dynamic attack is more necessary now than ever before.

Recognising the shifting patterns of an ongoing and ever-changing social engineering attack means better detection of problems flying below the metaphorical radar, the team suggests. Better detection means more effective security controls. The best defence is then to build, layer by layer, feedback loops that can catch an intruder at any of many different stages, rather than relying on a single front-line defence that might be defeated with a single blow.

Online Science

How can science benefit from online social media?

My good friend Jean-Claude Bradley of Drexel University, a chemist and host of the UsefulChem Blogspot blog, who is very keen on the use of information technology and the notion of the open notebook, was first to respond when I asked a variety of contacts for their opinions: “For me the answer is clear: it is a great way to find new collaborators whom I would otherwise not have met.” I’d have to agree. I’ve known JCB for quite some time now, although we’ve never even shaken hands. He was one of the early interviewees for my Reactive Profiles column. We didn’t meet virtually through online media, however, but through a mutual friend, Tony Williams, then of ACD/Labs and now increasingly well known as ChemSpiderman.

Erik Mols, a Lecturer in Bioinformatics at Leiden University of Applied Science, The Netherlands, echoed JCB’s remark: “It gives me the opportunity to discuss with people I never would have met,” he said, and added that, “It creates possibilities for my students to do their internship abroad.”

Another good friend, Egon Willighagen, who is a post-doc at Wageningen University & Research Center, provided a quite detailed answer: “It provides one with the means to mine the overwhelming amount of information,” he says, “For example, searching for some scientific piece of software is more targeted when I search amongst bookmarks of fellow bio/chemoinformaticians than if I were to search Google.” He points out that the Web 2.0 services are most useful when one’s online friends have labelled or tagged particular tools, or better still commented or rated them, as can be done with http://del.icio.us/, for instance. This concept holds just as true for publications, courses, molecules, and other content.

Willighagen points out that conventional search engines (WoS, Google, etc.) do fill an important gap. “But they lack the ability in themselves to link this with expert knowledge,” he says. “This is particularly why Google, I think, is offering all sorts of services: to find a user profile from a mining point of view. FOAF, social bookmarking, etc., make such profiles more explicit, allowing more targeted search results.”

Personal contact Joerg Kurt Wegner, a scientist at Tibotec (Johnson & Johnson), suggested that my original question might be couched in slightly different terms: “The question is rather why ‘social science’ is different to ‘editorial science’?”

He suggests that one of the best visualizations of this difference is Alexa’s web-ranking statistic comparing Wikipedia and Encyclopaedia Britannica. Wikipedia is a social information-gathering process, while Britannica is an editorial one. The graph shows that Wikipedia’s reach and popularity have increased dramatically compared with Britannica’s. “Based on this, I would conclude that the benefit (not only the plain access) is higher for the social service,” Wegner says. He then emphasises that there is indeed a shared problem among scientists: information overload.

“Honestly, I cannot see how any editorial process can cope with this problem,” says Wegner. Social software in contrast might be able to tackle this challenge. “Social software is driven by small dedicated user groups (oligarchies),” he explains, “So, compared to an editorial process the number of ‘real’ contributors might actually not be higher. However, the enrichment of diverse and really interested people might be better. If you think that you need for science the smartest set of diverse people, then ‘social software’ cannot be a bad choice, right?”

Wegner suggests that anyone who does not believe this to be the case should carry out a search for their collaborative partners using conventional information sources. The likely result, once again, will be information overload: more information but no increase in our reading capacity. “Information overload solutions and social software look like a matching relationship to me,” he adds. The final obstacle is for social software, web 2.0, online networking, social media, whatever you want to call it, to be accepted by the majority and to mature. “Has social software reached a mature status in Gartner’s hype cycle?” asks Wegner, meaning a point at which even conservative people will realize that it is highly recommended to adopt this technology. “The question here is also not if science benefits from social media, but how steep the benefit curve is. The longer you wait, the flatter the benefit curve.”

Deepak Singh of the business|bytes|genes|molecules blog adds that, “Historically communication among scientists was limited, e.g. you could get together with your peers from around the world at a conference, or through newsgroups. That’s where collaborations were born, but the scale was limited out of necessity.” Things have changed significantly. “Today, with resources like open wet-ware, etc, and more avenues for online conversation, including blogs and wikis, collaborations become a lot easier and feasible.”

In addition, Singh suggests that science is no longer restricted to peer-reviewed publications as the only means of formal communication within the scientific community. “You could publish a paper and blog about the back story, or like some others, e.g. Jean-Claude Bradley, you could practice Open Notebook Science.” He points out that the likes of videos and podcasts only add to the options now available for communicating science.

However, there is another thread to the idea of social media benefiting science: it could also benefit the public with respect to science. “For some reason,” says Singh, “science ended up becoming this silo and preserve of the experts and we ended up with a chasm between experts and others.” Social media could close this gap and make it easier to create virtual communities of people who have common interests, like to share their knowledge, are just curious about things, or are lobbyists and others. “One area where I see tremendous opportunity is education,” Singh adds, “whether through screencasting, or podcasts, or just video lectures and wiki-based learning, that’s probably the one area where I am most hopeful.”

Find David Bradley on Nature Network here and on the NanoPaprika nanoscience network here.