Tuesday, 28 February 2017

Cloud computing

Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications and services),[1][2] which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned or third-party data centers[3] that may be located far from the user, ranging in distance from across a city to across the world. Cloud computing relies on the sharing of resources to achieve coherence and economies of scale, much as a utility such as the electricity grid does over a network.
What is the cloud? Where is the cloud? Are we in the cloud now? These are all questions you've probably heard or even asked yourself. The term "cloud computing" is everywhere.
In the simplest terms, cloud computing means storing and accessing data and programs over the Internet instead of your computer's hard drive. The cloud is just a metaphor for the Internet. It goes back to the days of flowcharts and presentations that would represent the gigantic server-farm infrastructure of the Internet as nothing but a puffy, white cumulus cloud, accepting connections and doling out information as it floats.
What cloud computing is not about is your hard drive. When you store data on or run programs from the hard drive, that's called local storage and computing. Everything you need is physically close to you, which means accessing your data is fast and easy, for that one computer, or others on the local network. Working off your hard drive is how the computer industry functioned for decades; some would argue it's still superior to cloud computing, for reasons I'll explain shortly.
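
To make the contrast concrete, here is a minimal Java sketch of "local" versus "cloud" access: one read comes from the hard drive, the other is fetched over the Internet on demand. The file name notes.txt and the URL https://example.com/notes.txt are placeholders I've assumed for illustration, not a real cloud service.

// A minimal sketch contrasting local storage with cloud-style access over the Internet.
// The local file name and the remote URL are illustrative assumptions, not real endpoints.
import java.io.IOException;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Scanner;

public class LocalVsCloud {
    public static void main(String[] args) throws IOException {
        // Local computing: the data lives on this machine's hard drive.
        String local = new String(
                Files.readAllBytes(Paths.get("notes.txt")), StandardCharsets.UTF_8);

        // Cloud computing: the same kind of data is fetched over the Internet on demand.
        try (Scanner in = new Scanner(
                new URL("https://example.com/notes.txt").openStream(), "UTF-8")) {
            String remote = in.useDelimiter("\\A").hasNext() ? in.next() : "";
            System.out.println("Local bytes:  " + local.length());
            System.out.println("Remote bytes: " + remote.length());
        }
    }
}

The program itself doesn't care where the bytes came from; what changes is that the remote read depends on a network connection and a server run by someone else, which is exactly the trade-off discussed above.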

Sunday, 26 February 2017

what is digital image processing

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems.
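
A small sketch of what "computer algorithms on digital images" looks like in practice: the image is treated as a two-dimensional grid of pixels and an algorithm (here, a luminance-weighted grayscale conversion) is applied to every sample. The input and output file names are assumptions for the example.

// A minimal digital image processing sketch: treat the image as a 2-D grid of pixels
// and apply an algorithm (luminance-based grayscale) to each sample.
// "input.jpg" and "output.png" are placeholder file names.
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class GrayscaleDemo {
    public static void main(String[] args) throws IOException {
        BufferedImage src = ImageIO.read(new File("input.jpg"));
        BufferedImage dst = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);

        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                // Standard luminance weights: one gray value replaces the three channels.
                int gray = (int) (0.299 * r + 0.587 * g + 0.114 * b);
                dst.setRGB(x, y, (gray << 16) | (gray << 8) | gray);
            }
        }
        ImageIO.write(dst, "png", new File("output.png"));
    }
}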

Digital camera images

Digital cameras generally include specialized digital image processing hardware – either dedicated chips or added circuitry on other chips – to convert the raw data from their image sensor into a color-corrected image in a standard image file format.
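
A camera's real pipeline works on raw sensor data (demosaicing, noise reduction, and so on), but the color-correction step can be sketched in Java by scaling each channel with white-balance gains. The gain values and file names below are illustrative assumptions, not a camera's actual algorithm.

// A toy sketch of per-channel color correction (white-balance gains), applied to an
// already-decoded image. Gains and file names are assumptions for illustration only.
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class WhiteBalanceSketch {
    public static void main(String[] args) throws IOException {
        double gainR = 1.20, gainG = 1.00, gainB = 0.85;   // assumed gains
        BufferedImage img = ImageIO.read(new File("raw_preview.png"));
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = clamp((int) (((rgb >> 16) & 0xFF) * gainR));
                int g = clamp((int) (((rgb >> 8) & 0xFF) * gainG));
                int b = clamp((int) ((rgb & 0xFF) * gainB));
                img.setRGB(x, y, (r << 16) | (g << 8) | b);
            }
        }
        ImageIO.write(img, "png", new File("balanced.png"));
    }

    // Keep channel values inside the valid 0..255 range after scaling.
    private static int clamp(int v) {
        return Math.max(0, Math.min(255, v));
    }
}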

what is image processing

In imaging science, image processing is the processing of images using mathematical operations, by means of any form of signal processing for which the input is an image, a series of images, or a video, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image.[1] Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. Images are also processed as three-dimensional signals, with the third dimension being time or the z-axis.


Image analysis
Image analysis is the extraction of meaningful information from images, mainly from digital images by means of digital image processing techniques.[1] Image analysis tasks can be as simple as reading bar-coded tags or as sophisticated as identifying a person from their face.
Computer image analysis
Computer image analysis largely contains the fields of computer or machine vision, and medical imaging, and makes heavy use of pattern recognition, digital geometry, and signal processing. This field of computer science developed in the 1950s at academic institutions such as the MIT A.I. Lab, originally as a branch of artificial intelligence and robotics.

Sharpening and softening images

Graphics programs can be used to both sharpen and blur images in a number of ways, such as unsharp masking or deconvolution.[2] Portraits often appear more pleasing when selectively softened (particularly the skin and the background) to better make the subject stand out. This can be achieved with a camera by using a large aperture, or in the image editor by making a selection and then blurring it. Edge enhancement is an extremely common technique used to make images appear sharper, although purists frown on the result as appearing unnatural.
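
Both effects can be sketched with convolution kernels using the standard java.awt.image classes. The simple 3x3 sharpen kernel below only approximates edge enhancement (it is not full unsharp masking), and the file names are assumptions for the example.

// Sharpening and softening with convolution kernels via java.awt.image.ConvolveOp.
// File names are placeholders; the sharpen kernel is a basic edge-enhancement kernel.
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import javax.imageio.ImageIO;

public class SharpenBlurDemo {
    public static void main(String[] args) throws IOException {
        BufferedImage src = ImageIO.read(new File("portrait.jpg"));

        // Sharpen: boost the centre pixel against its four neighbours.
        float[] sharpen = {
                 0f, -1f,  0f,
                -1f,  5f, -1f,
                 0f, -1f,  0f };
        BufferedImage sharp = new ConvolveOp(
                new Kernel(3, 3, sharpen), ConvolveOp.EDGE_NO_OP, null).filter(src, null);

        // Soften: a 3x3 box blur that averages each pixel with its neighbours.
        float[] blur = new float[9];
        Arrays.fill(blur, 1f / 9f);
        BufferedImage soft = new ConvolveOp(
                new Kernel(3, 3, blur), ConvolveOp.EDGE_NO_OP, null).filter(src, null);

        ImageIO.write(sharp, "png", new File("sharpened.png"));
        ImageIO.write(soft, "png", new File("softened.png"));
    }
}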

Selecting and merging of images

Many graphics applications are capable of merging one or more individual images into a single file. The orientation and placement of each image can be controlled.
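
The same idea can be sketched in Java by drawing two images onto one shared canvas, where placement is controlled by the x/y offsets passed to drawImage. The file names and coordinates are assumptions for the example.

// Merging two images into a single file with controlled placement, using Graphics2D.
// "background.png", "logo.png" and the 20px offset are illustrative assumptions.
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class MergeImages {
    public static void main(String[] args) throws IOException {
        BufferedImage background = ImageIO.read(new File("background.png"));
        BufferedImage overlay = ImageIO.read(new File("logo.png"));

        // Draw both images onto one canvas; placement is set by the x/y offsets.
        BufferedImage merged = new BufferedImage(
                background.getWidth(), background.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = merged.createGraphics();
        g.drawImage(background, 0, 0, null);
        g.drawImage(overlay, 20, 20, null);   // overlay placed 20px from the top-left corner
        g.dispose();

        ImageIO.write(merged, "png", new File("merged.png"));
    }
}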

Enhancing images

In computer graphics, image enhancement is the process of improving the quality of a digitally stored image by manipulating it with software. It is quite easy, for example, to make an image lighter or darker, or to increase or decrease its contrast. Advanced photo-enhancement software also supports many filters for altering images in various ways.[1] Programs specialized for image enhancement are sometimes called image editors.
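
Making an image lighter or adjusting its contrast is a per-pixel linear operation, which the standard library exposes as java.awt.image.RescaleOp (output = pixel * scale + offset). The file names and the scale/offset values below are assumptions chosen for illustration.

// Simple image enhancement: brightness and contrast via RescaleOp.
// File names and the scale/offset values are illustrative assumptions.
import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class EnhanceDemo {
    public static void main(String[] args) throws IOException {
        BufferedImage src = ImageIO.read(new File("photo.jpg"));

        // Lighter: add a constant offset to every channel value.
        BufferedImage lighter = new RescaleOp(1.0f, 40f, null).filter(src, null);

        // More contrast: scale channel values away from the mid-gray point.
        BufferedImage contrast = new RescaleOp(1.3f, -38f, null).filter(src, null);

        ImageIO.write(lighter, "png", new File("lighter.png"));
        ImageIO.write(contrast, "png", new File("contrast.png"));
    }
}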

tamil

Hello, friend.
Tamil-speaking friends, warm greetings.
I am your friend.

java

Java is a general-purpose computer programming language that is concurrent, class-based, object-oriented,[14] and specifically designed to have as few implementation dependencies as possible. It is intended to let application developers "write once, run anywhere" (WORA),[15] meaning that compiled Java code can run on all platforms that support Java without the need for recompilation.[16] Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. As of 2016, Java is one of the most popular programming languages in use,[17][18][19][20] particularly for client-server web applications, with a reported 9 million developers.[21] Java was originally developed by James Gosling at Sun Microsystems (which has since been acquired by Oracle Corporation) and released in 1995 as a core component of Sun Microsystems' Java platform. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them.
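
The "write once, run anywhere" idea is easy to see with a minimal program: the source is compiled once to bytecode, and that bytecode runs unchanged on any JVM.

// Minimal Java program illustrating WORA: javac produces HelloWorld.class (bytecode),
// which any Java virtual machine can run regardless of the underlying architecture.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from the JVM on " + System.getProperty("os.name"));
    }
}

Compiled with "javac HelloWorld.java", the resulting class file runs via "java HelloWorld" on Windows, Linux, or macOS, as long as a JVM is installed; no recompilation is needed.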

Versions

Main article: Java version history
As of 2015, only Java 8 is officially supported. Major release versions of Java, along with their release dates:
  • JDK 1.0 (January 23, 1996)[38]
  • JDK 1.1 (February 19, 1997)
  • J2SE 1.2 (December 8, 1998)
  • J2SE 1.3 (May 8, 2000)
  • J2SE 1.4 (February 6, 2002)
  • J2SE 5.0 (September 30, 2004)
  • Java SE 6 (December 11, 2006)
  • Java SE 7 (July 28, 2011)
  • Java SE 8 (March 18, 2014)
Use outside of the Java platform

The Java programming language requires the presence of a software platform in order for compiled programs to be executed. Oracle supplies the Java platform for use with Java. The Android SDK is an alternative software platform, used primarily for developing Android applications.

Android

what is android
Android (stylized as android) is a mobile operating system developed by Google, based on the Linux kernel and designed primarily for touchscreen mobile devices such as smartphones and tablets. Android's user interface is mainly based on direct manipulation, using touch gestures that loosely correspond to real-world actions, such as swiping, tapping and pinching, to manipulate on-screen objects, along with a virtual keyboard for text input. In addition to touchscreen devices, Google has further developed Android TV for televisions, Android Auto for cars, and Android Wear for wrist watches, each with a specialized user interface. Variants of Android are also used on notebooks, game consoles, digital cameras, and other electronics.

hardware

The main hardware platform for Android is ARM (the ARMv7 and ARMv8-A architectures), with the x86 and MIPS architectures also officially supported in later versions of Android. The unofficial Android-x86 project provided support for the x86 architecture ahead of the official support,[6][90] and the MIPS architecture was likewise supported before Google added it officially. Since 2012, Android devices with Intel processors began to appear, including phones[91] and tablets. While gaining support for 64-bit platforms, Android was first made to run on 64-bit x86 and then on ARM64. Since Android 5.0 "Lollipop", 64-bit variants of all platforms are supported in addition to the 32-bit variants.

Software stack

On top of the Linux kernel, there are the middleware, libraries and APIs written in C, and application software running on an application framework which includes Java-compatible libraries. Development of the Linux kernel continues independently of Android's other source code bases.
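
At the top of that stack sits application code written against the framework's Java-compatible libraries. A bare-bones Activity gives the flavour; it only builds inside an Android project with the Android SDK, and the text it displays is just an illustrative assumption.

// A minimal Android Activity: application code running on the framework layer
// described above. Requires an Android project and SDK to build; the displayed
// string is an illustrative assumption.
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);   // framework callback when the screen is created
        TextView view = new TextView(this);
        view.setText("Hello from the Android application framework");
        setContentView(view);                 // hand the view back to the framework to display
    }
}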

Speech recognition

Speech recognition (SR) is the inter-disciplinary sub-field of computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as "automatic speech recognition" (ASR), "computer speech recognition", or just "speech to text" (STT). It incorporates knowledge and research from the linguistics, computer science, and electrical engineering fields.

Speech recognition is the ability of a machine or program to identify words and phrases in spoken language and convert them to a machine-readable format. Rudimentary speech recognition software has a limited vocabulary of words and phrases, and it may only identify these if they are spoken very clearly.

how speech recognition works
To convert speech to on-screen text or a computer command, a computer has to go through several complex steps. When you speak, you create vibrations in the air. An analog-to-digital converter (ADC) translates the analog waves of your voice into digital data by sampling the sound. The higher the sampling and precision rates, the higher the quality.
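
That first digitisation step can be sketched in Java with the standard javax.sound.sampled API: ask the sound card's ADC for samples of the microphone signal at a chosen sampling rate. This only captures raw digital audio; turning those samples into text requires a full recognition engine, and the format parameters (16 kHz, 16-bit, mono) are illustrative assumptions.

// Capturing digitised speech samples from the microphone with javax.sound.sampled.
// The sampling rate, bit depth and buffer size are assumptions for illustration.
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

public class CaptureSamples {
    public static void main(String[] args) throws LineUnavailableException {
        // Higher sampling rate and bit depth give a higher-fidelity digital signal.
        AudioFormat format = new AudioFormat(16000f, 16, 1, true, false);
        TargetDataLine mic = AudioSystem.getTargetDataLine(format);
        mic.open(format);
        mic.start();

        byte[] buffer = new byte[3200];          // roughly 0.1 s of audio at this format
        int read = mic.read(buffer, 0, buffer.length);
        System.out.println("Captured " + read + " bytes of digitised speech");

        mic.stop();
        mic.close();
    }
}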