
Web 2.0

Web 2.0 is an approach to designing systems that, by taking network interactions into account, become better the more people use them.

In fact, the term "Web 2.0" refers to projects and services that are actively developed and improved by users themselves: blogs, wikis, social networks, etc.

A distinctive feature of Web 2.0 is that users themselves are drawn into creating and repeatedly refining the information on a site.

Collective intelligence in action

"Web 2.0" is not a new version of the "World Wide Web," but only continues to use the technologies and concepts of "Web 1.0". Many of the ideas in Web 2.0 had already been in existence on the network long before the term Web 2.0 came into being. For example, the Amazon.com site allows users to write reviews and manuals since its inception in 1995.

Tags (keywords)

Keywords describe an object or assign it to a category. They act as tags attached to an object for search purposes.

The emergence and rapid spread of blogs also fits into the concept of Web 2.0, creating a so-called "writable web".

HTML allows a document to be marked with keywords, but this method was completely discredited by its widespread use for search spam.

Socialization of the site

The use of functionality that allows visitors to form a community.

  • The concept of socializing the site also includes the ability to customize content and create a personal zone (personal files, images, videos, blogs) for the visitor, so that the user feels unique and involved.
  • Encouraging, supporting and trusting "collective intelligence".

When forming a community, a competitive element such as reputation is important: it allows the community to self-regulate and gives users additional goals on the site.

Design

The concept of Web 2.0 is also reflected in design. Rounded shapes were preferred, along with imitations of convex surfaces and reflections in the manner of the glossy plastic of modern high-end devices (media players, for example). The graphics of such sites take up more space than an ascetic design would. In part, this trend was related to the concurrent release of new versions of operating systems that used the same ideas. Along with the graphics, Web 2.0 also tends to use noticeably larger font sizes to emphasize important content, especially headings, so that they stand out clearly against the colorful graphic design, and to give text content more space.

However, the graphic appearance of classic Web 2.0 design is now considered obsolete and uninspired. This is especially evident in the current trend towards informative websites, where simplicity, elegance, graphics and usability play the major role.

Additional Terms
Keyword
is a word in a text that concisely describes the content of the document, allowing the reader to better identify its subject matter. In web development, keywords are used mainly for searching and are the main way of organizing content.

Keywords in text analysis

In text analysis (including index building in search engines), keywords are the especially important and representative words of a text, the set of which gives a high-level description of its content for the reader. Keywords (KW) are characterized by the following traits:

  • Frequency - the most common keywords denote a feature of the object, a state, or an action.
  • Significant vocabulary - keywords are sufficiently generalized in their semantics, degree of abstraction, and style.
  • Interrelation - keywords are connected with each other by a network of semantic links and intersections of meanings.

If keywords are repeated too often in a text, search engines may regard this as spam and refuse to promote the page. The keyword set defines the index of words, their frequency, and their predictability.

Keywords in the markup of web pages

In HTML, keywords are specified with meta elements. This way of specifying keywords opens up even more opportunities for abuse, so only some search engines use the meta tag as a ranking factor, while others ignore it. Historically, this tag was overused in SEO and is now ignored by the leading search engines: Google, for example, often disregards the keywords meta tag because of past abuse. However, it is still used by other user agents (for example, web browsers when searching bookmarks). In XHTML microformats, the keywords describing a document are presented as a list of links, each of which should lead to a page listing other documents with the same keyword. This somewhat reduces the possibility of abuse, since each link should lead to real content. For such keywords the term "tags" is more often used, and at the code level they are implemented with the rel-tag microformat.
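As a rough illustration of the frequency trait described above, here is a minimal TypeScript sketch (the function and stop-word list are ours, purely for illustration) that counts word frequencies and returns the most frequent candidate keywords:

```typescript
// Minimal keyword-frequency sketch (illustrative only): real keyword
// extraction also weighs semantics, style and link structure.
const STOP_WORDS = new Set(["the", "a", "an", "and", "or", "of", "to", "in", "is", "it"]);

function topKeywords(text: string, limit = 5): string[] {
  const counts = new Map<string, number>();
  for (const word of text.toLowerCase().match(/[a-z]+/g) ?? []) {
    if (STOP_WORDS.has(word)) continue;            // skip non-significant vocabulary
    counts.set(word, (counts.get(word) ?? 0) + 1); // the frequency trait
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])                   // most frequent first
    .slice(0, limit)
    .map(([word]) => word);
}

// Example: the most repeated significant words become the candidate keywords.
console.log(topKeywords("The engine indexes pages; the engine then ranks pages by keywords"));
// → e.g. ["engine", "pages", "indexes", "then", "ranks"]
```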
Metadata
discloses information about the characteristics and properties of entities, allowing them to be automatically searched and managed in large information flows.

The difference between data and metadata

It is usually impossible to draw an unambiguous line between data and metadata in a document, because something can be both data and metadata at once. The title of an article, for example, can be treated both as metadata (the "title" metadata element) and as actual data (since the title is part of the text itself). By the usual definition, metadata is a set of structured information; it is even possible to create metadata about metadata, for example for output to special devices or for reading descriptions aloud with text-to-speech software.

Classification of metadata

Metadata can be classified by:

  • Content. Metadata can describe either the resource itself (for example, the name and size of a file) or the content of the resource (for example, "this video shows how to play football").
  • The resource as a whole or its parts. For example, "Title" (the movie name) refers to the whole movie, while "Scene description" (the description of an episode) is separate for each episode of the film.
  • Logical inference. Metadata can be divided into three layers: the bottom layer is raw data; the middle layer is metadata describing that raw data; and the top layer is metadata that allows logical conclusions to be drawn from the second layer.

The three most commonly used metadata classes are:

  • Internal metadata, which describes the structure or constituent parts of a thing, for example the format and size of a file.
  • Administrative metadata required for processing the information, such as the author, the editor, the date of publication, etc.
  • Descriptive metadata that describes the nature of a thing and its attributes, for example a set of related categories or links to other subjects related to the item in question.

In search engine optimisation, SEO experts concentrate on a specific kind of metadata: HTML markup such as the <title> tag, heading tags like <h1>, and the description and keywords meta tags. These are particular examples of metadata.
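To make the three commonly used classes more concrete, here is a small illustrative TypeScript sketch that models them as types; the interface and field names are hypothetical, not part of any standard:

```typescript
// Illustrative sketch: the three commonly used metadata classes modelled as
// TypeScript types. All names are hypothetical, not part of any standard.
interface InternalMetadata {        // structure / constituent parts of the thing
  fileFormat: string;
  fileSizeBytes: number;
}

interface AdministrativeMetadata {  // information needed to process the resource
  author: string;
  editor?: string;
  publishedOn: string;              // ISO date, e.g. "2018-10-01"
}

interface DescriptiveMetadata {     // the nature and attributes of the thing
  categories: string[];
  relatedSubjects: string[];
}

interface ResourceMetadata {
  internal: InternalMetadata;
  administrative: AdministrativeMetadata;
  descriptive: DescriptiveMetadata;
}

// Example record for a published article:
const articleMeta: ResourceMetadata = {
  internal: { fileFormat: "text/html", fileSizeBytes: 48211 },
  administrative: { author: "J. Doe", publishedOn: "2018-10-01" },
  descriptive: { categories: ["SEO", "Web"], relatedSubjects: ["keywords", "search engines"] },
};
```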
Search Engine
A search engine (SE) is a computer system designed to search for information. The best-known search engine applications are web services for finding text or graphic information on the World Wide Web. There are also systems that search for files on FTP servers, products in online stores, and information in newsgroups.

To find information, the user formulates a search query. The job of the search engine is to find documents containing either the specified keywords or words related to the user's request, and to generate a search results page. Some engines also extract information from suitable databases and resource directories on the Internet.

By search and maintenance method, search engines are divided into four types: systems using search robots, systems curated by humans, hybrid systems, and meta search engines.

The architecture usually includes:

  • a search robot (crawler) that collects information from websites or other documents;
  • an index that provides fast search over the accumulated information;
  • a search front end - a system with a graphical user interface for the user.

How does a search engine work?

As a rule, these systems operate in stages: the crawler fetches content; the indexer builds an index database of structured, searchable data; and the search front end provides the functionality for querying the indexed data. To keep the collected information up to date, this indexing cycle is repeated.

Search engines work by storing information about web pages, obtained from their HTML code and URLs (Uniform Resource Locators). A crawler is a program that automatically follows the links found on a page and extracts them. Based on those references, or on a predefined list of addresses, the crawler discovers new documents not yet known to the search engine. The site owner can exclude certain pages with robots.txt, which can be used to prevent the indexing of specific files, pages, or directories of the site. The search engine analyses the content of each page for further indexing: words can be extracted from headers, page text, or special fields - meta tags. One crawler looks for new URLs by scanning links across the Internet, while another robot visits each new page to analyse the information and add it to the index database.

The indexer is a module that analyses a page, first breaking it into parts using its own lexical and morphological algorithms. All the elements of the web page are isolated and analysed separately. Data about web pages is stored in the index database for use in subsequent queries.
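The following is a toy sketch of the indexer and search steps described above, in TypeScript; a real engine adds crawling, ranking, morphology and distributed storage, and the URLs and texts here are made up:

```typescript
// Toy inverted index illustrating the indexer and search steps described above.
type InvertedIndex = Map<string, Set<string>>; // word -> URLs of pages containing it

function buildIndex(pages: Record<string, string>): InvertedIndex {
  const index: InvertedIndex = new Map();
  for (const [url, text] of Object.entries(pages)) {
    for (const word of text.toLowerCase().match(/[a-z0-9]+/g) ?? []) {
      if (!index.has(word)) index.set(word, new Set());
      index.get(word)!.add(url); // record that this page contains the word
    }
  }
  return index;
}

function search(index: InvertedIndex, query: string): string[] {
  // Simple AND semantics: return pages that contain every query word.
  let result: Set<string> | null = null;
  for (const word of query.toLowerCase().match(/[a-z0-9]+/g) ?? []) {
    const pages = index.get(word) ?? new Set<string>();
    result = result === null
      ? new Set(pages)
      : new Set([...result].filter((url) => pages.has(url)));
  }
  return [...(result ?? [])];
}

// Example with two "crawled" pages:
const index = buildIndex({
  "https://example.com/a": "Node is a server platform for JavaScript",
  "https://example.com/b": "a search engine builds an index with a crawler",
});
console.log(search(index, "server platform")); // → ["https://example.com/a"]
```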
Additional SEO Terms
Net Promoter Score (NPS)
Net Promoter Score (NPS) is an index that measures customer loyalty to a product or company and is used to assess readiness for repeat purchases.

How it works

Measuring the NPS loyalty index involves several steps:

  • Consumers are asked "How likely is it that you would recommend the company/product/brand to your friends/acquaintances/colleagues?" on a 10-point scale, where 0 corresponds to "I would not recommend it under any circumstances" and 10 to "I would definitely recommend it".
  • Based on the scores obtained, all consumers are divided into three groups: 9-10 points - promoters of the product/brand, 7-8 points - passives, 0-6 points - detractors.
  • The NPS index itself is calculated: NPS = % promoters − % detractors.

The resulting loyalty score lies on a scale from -100 to 100. If all customers are willing to recommend the product, the score will be close to 90-100; if none are, the NPS can fall to -90 or -100. The NPS trademark was registered for the marketing tool that automates calculation of the above data.

History

Frederick Reichheld is considered the founder of the method; he first presented it in the article "One Number You Need to Grow", published in the Harvard Business Review in December 2003. In 2006 he released a book entitled "The Ultimate Question: Driving Good Profits and True Growth", in which he continued his arguments on loyalty, profitability and company growth. In 2001 Reichheld had conducted research across more than 400 American companies, where the main task was to measure the influence of customer loyalty (measured by NPS) on their growth rates. The main conclusion was that the average NPS across industries was 16%, while for companies such as eBay and Amazon it reached 75%. Reichheld does not claim the correlation is present everywhere: it is absent altogether in monopolistic markets. However, industries such as passenger air travel, insurance and car rental became prime examples of the interconnection. This is not surprising, since these companies are service providers, where customer satisfaction and loyalty depend on the level of customer service. As a result, many companies adopted the technique, including Apple, American Express, eBay, Amazon, Allianz, P&G, Intuit, Philips, etc.

For certain industries, especially software, it has been shown that detractors often stay with the company while passives leave, apparently because of relatively high switching barriers. Faced with criticism of the Net Promoter Score, its proponents stated that the statistical analyses offered only proved that the "would recommend" question is similar to other metrics in predictive power, but failed to address the real practical question, which is the core of Reichheld's argument. Proponents of the method also argue that analyses of third-party data are not as good as a company's analysis of its own customer base, and that the practical benefits of the method (a simple concept to communicate, a short survey, the ability to follow up with customers) outweigh any statistical disadvantages of the approach. They also allow other questions to be used within the Net Promoter system, as long as they reliably classify customers as promoters, passives and detractors.
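As a quick worked example of the formula, here is a small TypeScript sketch (the function name is ours) that classifies 0-10 ratings into promoters, passives and detractors and computes the index:

```typescript
// Sketch of the NPS calculation described above: promoters score 9-10,
// passives 7-8, detractors 0-6; NPS = % promoters - % detractors.
function netPromoterScore(ratings: number[]): number {
  if (ratings.length === 0) return 0;
  const promoters = ratings.filter((r) => r >= 9).length;
  const detractors = ratings.filter((r) => r <= 6).length;
  return Math.round(((promoters - detractors) / ratings.length) * 100);
}

// Example: 5 promoters, 3 passives and 2 detractors out of 10 respondents.
console.log(netPromoterScore([10, 9, 9, 10, 9, 8, 7, 7, 3, 5])); // → 30
```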
5G
is the fifth-generation mobile communication technology based on the IMT-2020 standard. Internet access speeds in 5G networks are predicted to be around 10 Gbit/s, and 5G reduces signal delay to one millisecond, against 10 milliseconds on 4G networks and 100 milliseconds on 3G. New generations of mobile communication appear roughly every 10 years; this interval is spent developing the technology and standards and upgrading the infrastructure. It is expected that 5G network capacity will be enough to serve more than 1 million devices per 1 km² at an average speed of 100 Mbps.

Who deals with 5G networks in the world today

5G technologies are being developed and used by:

  • research laboratories (for example, the 5G Lab Germany at the Dresden University of Technology);
  • mobile operators (British Vodafone, American Verizon and AT&T, Japanese NTT DoCoMo, Swedish Telia, etc.);
  • telecom equipment suppliers (Swedish Ericsson, Chinese Huawei, Finnish Nokia, South Korean Samsung, etc.).

5G applications

Many emerging apps and services require significantly better mobile Internet connection characteristics than existing commercial LTE networks can provide. It is expected that 5G networks will support billions of simultaneous device connections, making it possible to create new services in areas such as the tactile Internet (transmission of touch), IT and telecom, the automotive industry (self-driving cars), the entertainment industry, education, agriculture and many others. 5G networks will also improve the quality of existing services that involve large volumes of traffic.

Launch of the world's first 5G network

On October 1, 2018, Verizon announced the launch of the world's first commercial fifth-generation (5G) network. The operator deployed it in four US cities: Sacramento, Houston, Los Angeles, and Indianapolis. The company officially declared Houston resident Clayton Harris "the first customer of the 5G network in the world". The network provides an average speed of 300 Mbit/s and a maximum of 940 Mbit/s.
Node.js
is a server-side platform for running JavaScript via the V8 engine. JavaScript normally performs actions on the client side, while Node lets commands written in JS run on the server. With Node, front-end programmers can write full-fledged server applications: Node can call commands from JavaScript code, work with external libraries, and act as a web server.

Node advantages

Node is easier to scale. When thousands of users connect to the server at the same time, Node works asynchronously, prioritizing and allocating resources more intelligently; Java, for example, allocates a separate thread for each connection.

Features

  • Asynchronous and event-driven. All Node.js APIs are asynchronous and non-blocking. In essence, a Node-based server never waits for data to be returned from an API: after making a call, the server proceeds to the next one, and the Node.js event notification mechanism helps the server receive the response to the previous call.
  • Very fast. Built on Google Chrome's V8 JavaScript engine, Node.js executes code very quickly.
  • Single-threaded but easily scalable. Node.js uses a single-threaded model with an event loop. The event mechanism helps the server respond in a non-blocking way and provides high scalability, unlike traditional servers that create a limited number of threads to process requests. A single-threaded Node program can serve many more requests than traditional servers such as the Apache HTTP Server.
  • No buffering. Node.js apps do not buffer data; they simply output data in chunks.

Where is Node.js used?

Node.js has established itself as a strong technological choice in the following areas:

  • I/O-bound applications
  • Streaming apps
  • Data-intensive real-time (DIRT) applications
  • JSON API based applications

Node is successfully used by large companies such as eBay, Microsoft, PayPal, General Electric, Uber, GoDaddy, Wikipins, and Yahoo!.
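Below is a minimal sketch of Node acting as a web server with non-blocking, event-driven request handling, written in TypeScript; it assumes Node.js with the @types/node typings installed, and the index.html file name is only an example:

```typescript
// Minimal sketch of Node acting as a web server with non-blocking,
// event-driven handling (assumes Node.js and @types/node; file name is an example).
import { createServer } from "http";
import { readFile } from "fs/promises";

const server = createServer(async (_req, res) => {
  try {
    // The event loop keeps accepting other connections while this file is read.
    const page = await readFile("index.html", "utf8");
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(page);
  } catch {
    res.writeHead(404, { "Content-Type": "text/plain" });
    res.end("Not found");
  }
});

server.listen(3000, () => console.log("Listening on http://localhost:3000"));
```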