28.12.09

Methodological Reviews

A Methodological Review of a Master Thesis “Functions of Online Communities”

The review is based on a master's thesis by Ivo Kiviorg, "Functions of Online Communities", which is an empirical research paper. The research surveys studies of how online communities work and then poses two questions: what do online communities offer their members, and what ties the members together in the communities? The author of the thesis set out to find an appropriate approach to studying these areas of interest.

Although the author of the paper has not explicitly defined the methodology used, the paper is built on quantitative research. It has the following features: (1) its structure is rigid; (2) the literature review plays a major role in shaping the research questions and the 13 proposed concepts; (3) the data are measurable (numerical) and observable; (4) it uses statistical analysis of a survey and compares the results with previous findings (Mack et al., 2005).

The author of the research paper uses a cross-sectional survey design with a sample of 387 people from the online community Rada7.ee, achieving a response rate of over 75%. The survey data are then analysed using factor analysis, with principal component extraction and Varimax rotation. A correlational design was also used to uncover the motivations underlying the sample's social identification, since the relations between individual questions are examined in order to compile them into five factors.
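The analysis described above can be sketched in a few lines of Python. This is only an illustration of the technique, not the thesis' actual analysis: the survey data are not available, so the responses below are randomly generated stand-ins for 387 respondents rating the 13 concepts, and the five-factor extraction with Varimax rotation uses scikit-learn's `FactorAnalysis`.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical stand-in for the survey data: 387 respondents
# answering 13 Likert-scale items (the thesis' 13 concepts).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(387, 13)).astype(float)

# Extract five factors and apply Varimax rotation, mirroring the
# procedure described in the thesis.
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
scores = fa.fit_transform(responses)

print(scores.shape)          # one score per respondent per factor
print(fa.components_.shape)  # loadings of each of the 13 items on each factor
```

In a real analysis one would inspect `fa.components_` (the rotated loadings) to see which survey questions cluster together into each of the five factors.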

The purpose of using a quantitative methodology was to collect data about people's ties in the communities and to derive answers to the questions the author initially posed. Ivo Kiviorg used this methodology to find out how various people identified themselves in the community and how the pre-given concepts varied among them.

19.12.09

A blogged review of "Social Media, Viral Marketing, and Crowdsourcing."

I reviewed a wiki of "Social Media, Viral Marketing, and Crowdsourcing".

The wiki was built on the Wikiversity platform, and the contributors were four students. They had collaborated on a single page and provided a clear, neat table of contents. It was clearly visible who contributed which part.

Norbert Kaareste analysed today's social media. He started with the history of each aspect, and the definitions were clear, although I would have liked to read more about the topic in the wiki itself rather than from additional resources. Norbert ended with the future of social media. References were provided as links, though I could not understand why a set of links preceded the actual reference section. All in all, the topic left a good impression on me, and Norbert had put a lot of personal input into the post.

Maris Üksti discussed viral marketing by citing various sources. She started with the history, giving examples and listing elements, then moved on to a conclusion. The post helped me figure out what viral marketing is, so I had a good impression of the content. The references were again links, with a reference section at the end, and a mixture of linked words and plain links appeared in the text. Maris concluded the text nicely in her own words.

Indrek Saar wrote about crowdsourcing and used a somewhat different structure from the rest of the participants. He linked words in the text and gave links at the end (resources). Again the topic was clearly presented (some language mistakes obscured minor sentences but did not affect the overall meaning), and I gained some new knowledge. Indrek provided many examples, which helped in understanding the topic.

All three inputs were similarly structured, so it was clear that group work had been done. I was wondering what the fourth person, Marek Mühlberg, would write, and the answer was: a conclusion. What first struck my eye were his references, which were the most neatly presented, although inside the text I again saw a mix of linked words and URLs. He interpreted all three concepts a bit differently from how they were initially presented and made a short analysis of the three. I think it was good that the conclusion was presented differently; otherwise it would not have made sense.

I think the wiki lacked only minor cooperative touches but was otherwise readable and understandable. And, most importantly, I got new knowledge out of it.

Ethics and Law in New Media, week eleven

Analyse both free software and open source approach in your blog. If you prefer one, provide your arguments.

The two do not differ much, except that the first has four points in its definition while the latter has more (it also covers the non-discrimination aspect and licence specifications). Another difference is terminology, which to my mind is word play. "Free software" at first led me to think it was not so similar to "open source": I initially assumed the first term came without source code, so when I discovered that source code is provided under both approaches, I was a little confused by the terminology. I looked through the Free Software Definition and the Open Source Definition alongside the course materials, and what I found is already written above.
I think I would prefer the second term, open source, because to me it defines more clearly what comes with the package. 

6.12.09

Ethics and Law in New Media, week ten

What could the software licensing landscape look like in 2015? Write a short (blogged) predictive analysis. 

It is difficult to predict what it will look like, but based on the text I read, there might be two scenarios. In the first I picture Microsoft still pushing its rights and using technology to make it easier to track violators of its licence contracts. The second might be a battle between MS and GNU, which would lead MS to think about what it could do better and how it could profit from that, as has happened until now.

I do not actually believe that much will change in five years licence-wise. People in Estonia are used to Microsoft, and IT directors install MS software by default (with the exception of some organizations like the Tiger Leap Foundation, which suggests using free, open-source software), because users tend to be more acquainted with it. The Tiger Leap Foundation's computers offer a choice between Linux and MS Windows, and what do you think most teachers use? The latter, of course (this is based solely on my own opinion and observations from my workplace).


Perhaps Ottavio's idea that Microsoft is going to change something radically, be it software or policies, holds true. But even if Microsoft offered an online Word or Excel or anything else from that package, it would still be secondary to Google, and I think Microsoft would look for a way to benefit from it as well. To continue the predictions from here: there have been discussions about whether Google could maliciously use its user-base data to, so to speak, "rule the world". I doubt such actions, because it would drive the world mad, and I also doubt that Google would be ready for that. I guess not. There will definitely be more free, innovative online content, because for educational technology the Internet is a rich facility.



Write a short analysis about applicability of copying restrictions - whether you consider them useful, in which cases exceptions should be made etc. 


I think that exceptions to copying restrictions should be made for educational and research purposes, as this could help improve software. A good example (sadly not from education) is Microsoft itself: with ALTAIR it launched a program where users could improve the library, and I believe this was for Microsoft's own benefit as well, since it got ideas from users for free.
Restrictions could also be relaxed when we want to avoid having multiple programs for one task: for example, conversion to PDF works well across different programs, so no separate readers need to be used. This is an idea I got from the course text.
Also, when I have purchased an item, I should not be prohibited from making more than one copy of the disc for personal use. Of course, nowadays CDs and similar physical data carriers are outdated and the Internet stores all information; legal online storage of a purchased disc's contents would also be nice.
Libraries and university facilities should be able to make copies and let their students use the materials. A good example here is Tallinn University (though I realise Microsoft programs are not the best example for this course), where students can use its programs during their studies. Of course a lot of legal fine print is attached, but still, for personal use it is fine.
I would consider all exceptions to copying restrictions useful, because I love free software; but most of all I prefer online solutions, and I use them eagerly. Google has won my heart for now, I must admit: it is open to everyone and free to use.

5.12.09

Ethics and Law in New Media, week nine

Study the GNU GPL and write a short blog essay about it. You may use the SWOT analysis model (strengths, weaknesses, opportunities, threats).

The GNU GPL licence was originally written by Richard Stallman in 1989 to protect software and let various professionals develop the source code freely. Under it, people may obtain the code but may not place it under a restrictive copyright licence or patent it. The Free Software Foundation continues to promote free software and is allowed to make changes to the GPL when necessary.

The strengths of such licences are, as we all know, freedom to use and distribute the software, which makes it easy to spread while still retaining the creator's rights to the code. Modified versions automatically carry the same rights as the original GNU General Public License (GPL) and are distributed under the very same licence conditions. The author is thus protected by copyleft, while other users are free to distribute and modify the software, which adds to its quality. Changes should be marked so that the code can be tracked and flaws noticed, which is also positive, as it makes the program easy to change.
The licence itself is short and easy to read, so everybody can grasp the overall text and dig into it, whereas long and difficult licence texts are normally skipped.
It is also possible to sell software under the GPL licence, provided that the source code is left open. Perhaps this last point makes software programming easier as well.

The weaknesses of such licences are that newer versions of the GPL do not necessarily remain compatible with the previous ones, although code can still be released under earlier versions. Perhaps another weakness is that modifiers of a program cannot modify the conditions of the licence, because the rights still belong to the initial creator: GNU GPL copyleft is built on copyright law. As for doing business, users are automatically bound to the licence, making them responsible for ensuring that if they modify the code, future users also abide by the licence.
The licence gives no warranty for such programs, which may make it difficult to find support, or the support may cost more than with commercial operating systems.


The opportunities with the GPL are endless, because it enables people to be creative and distribute their creations either for free or for a fee. Other enthusiasts can amend the software and distribute it further, making a collaborative, worldwide effort to polish programs or to draw new ideas from others' work.

There is always a threat that the GPL will be changed to introduce some restrictions, such as patenting, because the licence permits the Free Software Foundation to make changes to the GNU GPL when needed.

So, with little threat involved, these licences are the best way to distribute open and, most of the time, free software. I think the benefits outweigh the doubts and weaknesses of such licences.

Find a good example of the "science business" described above [in the text] and analyse it as a potential factor in the Digital Divide discussed earlier. Is the proposed connection likely or not? Blog your opinion.

Well, if we think of the digitalization of books, especially important ones, many people can benefit from it: from poor Estonian students who cannot afford to buy books in the current economic downturn to those who cannot afford a university education, for example, but want to gain good knowledge of certain topics.
Universities have access to digital libraries, but it is usually quite a hassle digging through the protective wall of registration and payment. I agree with Priidu, who said that the providers of materials need to be paid; but as Estonian university lecturers run courses on Wikiversity, they are paid either through EITSA or the universities themselves, so they are not doing their work entirely for free. So I guess there are alternative payment methods. A good initiative is that many international universities have put their lectures online as video or podcasts, which gives students a better opportunity to learn.

Ethics and Law in New Media, week eight

Study the Anglo-American and Continental European school of IP. Write a short comparative analysis to your blog (if you have clear preference for one over another, explain that, too).

I base my comparison on Laura Moscati's paper, in which, after searching for a long time for a plausible comparison, I found quite a good overview of the two systems.
The two systems differ mainly in two aspects. The Continental European school of IP (droit d'auteur) protects the author's work throughout its various stages and grants protection without formalities, covering both moral and economic rights. The Anglo-American variant (copyright law) grants the right to reproduce copies in order to let the largest number of people access the work; before a work is copyrighted, it has to be registered with the Copyright Office after being reproduced and published.
The paper also notes that America only relatively recently joined the Berne Convention of 1886. The list of countries from the US Copyright Office shows which countries have copyright relations with the US; Berne Convention countries are included. The Convention deals with authors' rights according to the Continental European model, thus making the two schools of IP more similar than before. Laura Moscati also states that the initial roots of both schools lie in the droit d'auteur of French origin. Copyright, which emerged with the advent of the printing press, left its marks on both the Continental European and the Anglo-American schools of IP.

I think that if I had to choose, I would choose the Anglo-American school of IP, because works are distributed to the largest number of people possible. Our course, and our speciality as well, strives to distribute work to as many people as possible who need it to progress, to create, and to gain new ideas from past works. Copyright as a safeguard against copy-paste plagiarism is fair enough, but works should not be hidden behind financial "curtains" if these delay the flow of thoughts and creations.