Software Defined Network Function
By decoupling network functions from the underlying hardware appliances, NFV provides flexible provisioning of software-based network functionalities on top of an optimally shared physical infrastructure. It addresses the high operational cost of managing and controlling closed, proprietary appliances by leveraging low-cost commodity servers.
“3D Password” is one of the good seminar topics for computer science technical seminars. Normally, the authentication scheme a user undergoes is either very lenient or very strict. Over the years, authentication has been a very interesting area of study. With technology developing, it can be very easy for ‘others’ to fabricate or steal an identity or to hack someone’s password. Therefore, many algorithms have come up, each with an interesting approach to calculating a secret key. These algorithms typically pick a random number in a range on the order of 10^6, so the chances of the same number recurring are small.
An Improved K-means Clustering Algorithm
The traditional K-means algorithm is a widely used clustering algorithm with a wide range of applications. This IEEE seminar paper introduces the idea behind the K-means clustering algorithm, analyzes the advantages and disadvantages of the traditional algorithm, and elaborates a method of improving it based on choosing better initial cluster centers and determining the value of K. Simulation experiments show that the improved algorithm is not only more stable in the clustering process but also reduces, or even avoids, the impact of noisy data objects in the dataset, ensuring that the final clustering result is more accurate and effective.
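As a sketch of the kind of initialization improvement the paper describes, the widely used k-means++ seeding spreads the initial centers out before running the usual assignment/update iterations. This is an illustrative stand-in, not the paper's exact method:

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """k-means++-style seeding: spread the initial centers out,
    which makes clustering more stable than random initialization."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # Squared distance from each point to its nearest chosen center.
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        # Pick the next center with probability proportional to d2.
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = kmeans_pp_init(X, k, rng)
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        # Move each center to the mean of its assigned points.
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers
```

Because the seeds start far apart, repeated runs on well-separated data converge to the same partition far more often than with purely random seeds.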
An ATM with an Eye
There is an urgent need for improved security in the banking sector. With the advent of the ATM, banking became a lot easier, but it also became a lot more vulnerable. The chances of misuse of this much-hyped yet ‘insecure’ product are manifold, owing to the growing number of ‘intelligent’ criminals. ATM systems today use no more than an access card and PIN for identity verification. This situation is unfortunate, since tremendous progress has been made in biometric identification techniques, including fingerprinting, retina scanning, and facial recognition. This IEEE seminar topic proposes the development of a system that integrates facial recognition technology into the identity verification process used in ATMs. The development of such a system would serve to protect consumers and financial institutions alike from fraud and other breaches of security.
“Blue Brain” is the name of the world’s first virtual brain: a machine that can function as a human brain. Today scientists are researching how to create an artificial brain that can think, respond, take decisions, and keep anything in memory. The main aim of this seminar topic is uploading the human brain into a machine, so that a man can think and take decisions without any effort. After the death of the body, the virtual brain will act as the man; even after a person’s death, we will not lose the knowledge, intelligence, personality, feelings, and memories of that man, which can be used for the development of human society. No one has ever fully understood the complexity of the human brain; it is more complex than any circuitry in the world. So the question may arise: “Is it really possible to create a human brain?” The answer is yes, because in whatever man has created, he has always followed nature. Before the computer existed, it too was a big question for all, but today it is possible thanks to technology, which is growing faster than everything else. IBM is now researching how to create a virtual brain, called “Blue Brain”. If it succeeds, this would be the first virtual brain in the world.
Is it possible to create a computer that can interact with us the way we interact with each other? For example, imagine that one fine morning you walk into your computer room and switch on your computer, and it tells you, “Hey friend, good morning, you seem to be in a bad mood today.” Then it opens your mailbox, shows you some of your mails, and tries to cheer you up. It seems like fiction, but it will be the life led with “BLUE EYES” in the very near future. The basic idea behind this computer technology is to give the computer human-like power. We all have some perceptual abilities: we can understand each other’s feelings; for example, we can understand someone’s emotional state by analyzing his facial expression. Adding these human perceptual abilities to computers would enable computers to work together with human beings as intimate partners. The “BLUE EYES” computer technology aims at creating computational machines that have perceptual and sensory abilities like those of human beings.
A captcha is a program that can generate and grade tests that (a) most humans can pass but (b) current computer programs cannot. Such a program can be used to differentiate humans from computers and has many applications in practical security. This seminar topic covers how such tests are generated and graded, and where they are applied.
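The generate-and-grade idea can be sketched in a few lines. The rendering/distortion step that makes the challenge hard for programs (but easy for humans) is assumed and omitted here; this only shows the challenge lifecycle:

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits

def new_captcha(length=6):
    """Generate a random challenge string. In a real captcha this text
    would be rendered as a distorted image before being shown."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def grade(expected, response):
    """Grade a response case-insensitively. A constant-time comparison
    avoids leaking the answer through timing differences."""
    return secrets.compare_digest(expected.upper(), response.strip().upper())
```

A server would store `expected` alongside the session, serve the distorted image, and call `grade` on the submitted form field.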
Cloud 9 Technology
“Cloud9” is one of the trending topics for computer science Technical Seminars, and a new technology in computer science. Cloud9 is a cloud-based testing service that promises to make high-quality testing fast, cheap, and practical. Cloud9 runs on compute utilities like Amazon EC2, and we envision the following three use cases: First, developers can upload their software to Cloud9 and test it swiftly, as part of their development cycle. Second, end users can upload recently downloaded programs or patches and test them before installing, with no upfront cost. Third, Cloud9 can function as a quality certification service, akin to Underwriters Labs, by publishing official coverage results for tested applications. In an ideal future, software companies would be required to subject their software to quality validation on such a service, akin to mandatory crash testing of vehicles. In the absence of such certification, software companies could be held liable for damages resulting from bugs. For a software testing service to be viable, it must aim for maximal levels of automation.
Creating enhanced maps
“Creating enhanced maps” is one of the simple topics for computer science Technical Seminars. The concept of enhanced maps (Emaps) was introduced with one main objective: to characterize roads, first, with more completeness and, second, with more accuracy than standard maps, so as to fulfill the requirements of new, challenging road-safety applications and advanced driver-assistance systems (ADAS). This technical seminar paper introduces a paradigm for Emap definition and creation in which every road lane is represented and topologically connected to the rest of the lanes. Following this approach, a number of Emaps have been created in France, Germany, and Sweden. The experiments carried out in these test sites show the capability of our Emap definition to assist with determining the vehicle position at the lane level. Details of the processes of extraction and connection of the road segments are given in the core of this paper, as well as a discussion of the elaboration process and future guidelines in the conclusion.
Silent Sound Technology
‘Silent Sound’ technology aims to notice every movement of the lips and transform it into sound, which could help people who have lost their voices to speak, and allow people to make silent calls without bothering others. Rather than making any sounds, your handset would decipher the movements your mouth makes by measuring muscle activity, then convert this into speech that the person on the other end of the call can hear. So, basically, it reads your lips. This technology will be very helpful whenever a person loses his voice while speaking, and it even lets us tell our PIN number to a trusted friend or relative without fear of eavesdropping; at the other end, the listener hears a clear voice. An awesome feature of this technology is that it is an instant polyglot: movements can be immediately transformed into the language of the user’s choice. This translation works for languages like English, French, and German; but for languages like Chinese, where different tones can hold many different meanings, this poses a problem, said Wand. He also said that in five or maybe ten years this will be used in everyday technology.
SSL and TLS
This subject came into being with the advent of the Internet. There are three basic issues – confidentiality, integrity and availability. When an unauthorized person reads or copies information, it is known as loss of confidentiality. When the information is modified in an irregular manner, it is known as loss of integrity. When the information is erased or becomes inaccessible, it is known as loss of availability. Authentication and authorization are the processes of the Internet security system by which numerous organizations make information available to those who need it and who can be trusted with it. When the means of authentication cannot be refuted later, it is known as non-repudiation. Internet security can be achieved through use of antivirus software, which quarantines or removes malicious software programs. Firewalls can determine which particular websites can be viewed and block deleterious content.
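The confidentiality and integrity guarantees above are what SSL/TLS provides in practice. Python's standard `ssl` module shows the client side in a few lines; `create_default_context()` turns on certificate-chain and hostname verification by default (a sketch, with the host left as a caller-supplied placeholder):

```python
import socket
import ssl

def tls_peer_info(host, port=443):
    """Connect over TLS and report the negotiated protocol version and
    the server certificate's subject. If the certificate chain or the
    hostname cannot be verified, the handshake raises an ssl error,
    signalling a possible loss of integrity or authenticity."""
    ctx = ssl.create_default_context()   # loads the system's trusted CA roots
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.getpeercert()["subject"]
```

All application data sent on `tls` is then encrypted (confidentiality) and authenticated (integrity) by the negotiated cipher suite.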
Security and Privacy in Cloud Computing
Recent advances have given rise to the popularity and success of cloud computing. However, outsourcing data and business applications to a third party causes security and privacy issues to become a critical concern. In this study, the authors share a common goal: to provide a comprehensive review of the existing security and privacy issues in cloud environments. We have identified the five most representative security and privacy attributes (i.e., confidentiality, integrity, availability, accountability, and privacy-preservability). Beginning with these attributes, we present the relationships among confidentiality, availability, and privacy-preservability, the vulnerabilities that may be exploited by attackers, the threat models, as well as existing defense strategies in a cloud scenario. Future research directions are then identified for each attribute.
E-paper is a revolutionary material that can be used to make next-generation electronic displays. It is a portable, reusable storage and display medium that looks like paper but can be repeatedly written on thousands of times. These displays mark the beginning of a new era for battery-powered information appliances such as cell phones, pagers, watches, and hand-held computers. Two companies are carrying out pioneering work in the development of electronic ink, and both have developed ingenious methods to produce it. One is E Ink, a company based in Cambridge, in the U.S.A.; the other is Xerox, doing research at Xerox’s Palo Alto Research Center. Both technologies, being developed commercially for electronically configurable paper-like displays, rely on microscopic beads that change color in response to the charges on nearby electrodes.
The idea behind EyeOS is that the whole system lives in the web browser. The client needs only a web browser to work with EyeOS and all its applications, including Office and PIM ones; this applies to both modern and older PCs. EyeOS is an open-source platform designed to hold a wide variety of web applications. It was conceived as a new definition of an operating system, where everything inside it can be accessed from anywhere inside a network. All you need to do is log in to your EyeOS server with a normal Internet browser, and you have access to your personal desktop, with your applications, documents, music, movies… just as you left it. EyeOS lets you upload your files and work with them no matter where you are. It contains applications like a word processor, address book, PDF reader, and many more developed by the community.
Game Playing in AI
Game playing was one of the first tasks undertaken in Artificial Intelligence. Game theory has its history from the 1950s, almost from the days when computers became programmable. The very first game tackled in AI was chess. Pioneers in the field were Konrad Zuse (the inventor of the first programmable computer and the first programming language), Claude Shannon (the inventor of information theory), Norbert Wiener (the creator of modern control theory), and Alan Turing. Since then, there has been steady progress in the standard of play, to the point that machines have defeated human champions (although not every time) in chess and backgammon, and are competitive in many other games.
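The core of these classical game-playing programs is minimax search: recursively assume both players play optimally and back scores up the game tree. A compact sketch for tic-tac-toe (chess programs layer alpha-beta pruning and evaluation functions on top of the same idea):

```python
def winner(b):
    """Return 'X' or 'O' if a line is complete on board b, else None."""
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
             (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (score, best_move) for `player` to move on board b.
    Scores are from X's perspective: +1 X wins, -1 O wins, 0 draw."""
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, c in enumerate(b) if c == " "]
    if not moves:
        return 0, None                      # full board: draw
    best = None
    for m in moves:
        b[m] = player                       # try the move...
        score, _ = minimax(b, "O" if player == "X" else "X")
        b[m] = " "                          # ...and undo it
        if best is None or (player == "X" and score > best[0]) \
                        or (player == "O" and score < best[0]):
            best = (score, m)
    return best
```

On the empty board this search confirms the well-known result that perfect play by both sides ends in a draw.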
The proposed recognition algorithm classifies the text of a sentence according to the following emotional categories: happiness, sadness, anger, fear, disgust, and surprise. This is one of the trending new technologies in computer science. The algorithm estimates emotional weights for each category (how intense the emotion is) in the form of a numerical vector. The vector is used to determine the dominant emotional type (the type with the highest weight) and the overall emotional valence of the sentence (whether the emotion is positive, negative, or neutral). If the vector is zero or close to zero, the sentence is considered emotionally neutral; users of our software may set their own criteria for neutrality. To recognize emotions in sentences, we use a hybrid of a keyword-spotting method and a rule-based method. The keyword-spotting approach is based on a lexicon of words and expressions related to emotions. The main contribution is threefold. First, to construct the word lexicon, we use both the power of human judgment and the power of WordNet, a lexical database for the English language: a survey provides an initial word set, and WordNet is automatically searched for all semantic relatives of those words. Second, we take into account “:)”s, “>:O”s, and “ROFL”s through an extensive emoticon lexicon. Third, we try to overcome some of the problems associated with keyword-spotting techniques through several heuristic rules. We argue that the proposed technique is suitable for analyzing fragmented online textual interaction, which is abundant in colloquialisms.
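The weight-vector and dominant-emotion machinery can be sketched as follows. The tiny lexicon and its weights here are made-up stand-ins for the survey/WordNet lexicon and emoticon lexicon described above:

```python
# Toy lexicon: token -> (category, weight). Illustrative only; a real
# system would use the survey-seeded, WordNet-expanded lexicon.
LEXICON = {
    "happy": ("happiness", 0.8), "glad": ("happiness", 0.6),
    "sad": ("sadness", 0.8), "miserable": ("sadness", 0.9),
    "angry": ("anger", 0.8), "furious": ("anger", 1.0),
    "afraid": ("fear", 0.8), "gross": ("disgust", 0.7),
    "wow": ("surprise", 0.6),
    ":)": ("happiness", 0.7), ":(": ("sadness", 0.7),
    ">:o": ("surprise", 0.7),       # tokens are lowercased before lookup
}
CATEGORIES = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]
POSITIVE = {"happiness", "surprise"}

def emotion_vector(sentence):
    """Sum lexicon weights per category into an emotional-weight vector."""
    vec = dict.fromkeys(CATEGORIES, 0.0)
    for token in sentence.lower().split():
        if token in LEXICON:
            cat, w = LEXICON[token]
            vec[cat] += w
    return vec

def classify(sentence, neutral_threshold=0.1):
    """Return (vector, dominant category or None, valence)."""
    vec = emotion_vector(sentence)
    dominant = max(vec, key=vec.get)
    if vec[dominant] < neutral_threshold:   # zero or near-zero vector
        return vec, None, "neutral"
    valence = "positive" if dominant in POSITIVE else "negative"
    return vec, dominant, valence
```

The `neutral_threshold` parameter plays the role of the user-set neutrality criterion; the heuristic rules (negation handling, intensifiers) would wrap around `emotion_vector`.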
Browser technology is changing very fast these days, and we are moving from the visual paradigm to the voice paradigm. The voice browser is the technology to enter this paradigm: a “device which interprets a (voice) markup language and is capable of generating voice output and/or interpreting voice input, and possibly other input/output modalities.” This seminar paper describes the requirements for two forms of character-set grammar, offered as a matter of preference or implementation; one is more easily read by (most) humans, while the other is geared toward machine generation.
Wireless Body Area Networks
Future communication systems are driven by the concept of being connected anywhere at any time, and the medical field is no exception. Wireless medical communications that assist people’s work and replace wires in a hospital are one application of wireless communications in healthcare. The increasing use of wireless networks and the constant miniaturization of electrical devices have empowered the development of wireless body area networks (WBANs). In these networks, various sensors are attached to clothing or the body, or even implanted under the skin. These devices provide continuous health monitoring and real-time feedback to the user or medical personnel. The wireless nature of the network and the wide variety of sensors offer numerous new, practical, and innovative applications that improve healthcare and the quality of life. A sensor measures certain parameters of the human body, either externally or internally; examples include measuring the heartbeat or body temperature, or recording a prolonged electrocardiogram (ECG).
B-Tree File System
The design goal is to work well for a wide variety of workloads, and to maintain performance as the filesystem ages. This is in contrast to storage systems aimed at a particular narrow use case. BTRFS is intended to serve as the default Linux filesystem; it is expected to work well on systems as small as a smartphone, and as large as an enterprise production server. As such, it must work well on a wide range of hardware.
Keyboards Without Keyboards
Input to small devices is becoming an increasingly crucial factor in development for the ever-more-powerful embedded market. Speech input promises to become a feasible alternative to tiny keypads, yet its limited reliability, robustness, and flexibility render it unsuitable for certain tasks and/or environments. Various attempts have been made to provide the common keyboard metaphor without the physical keyboard, i.e., to build “virtual keyboards”. This promises to leverage our familiarity with the device without incurring the constraints of its bulky physical form. This research surveys technologies for alphanumeric input devices and methods, with a strong focus on touch-typing. We analyze the characteristics of the keyboard modality and show how they contribute to making it a necessary complement to speech recognition rather than a competitor.
A Distributed MAC Design
Wireless USB gives consumers an easy, secure way to connect their PC, CE, and mobile devices without a cable, and without sacrificing speed. Wireless USB enables products from the PC, CE, and mobile industries to connect wirelessly at up to 480 Mbps at 3 meters and 110 Mbps at 10 meters. The requirement for high-quality multimedia service in wireless home-network environments has been increasing in recent years. The WiMedia Alliance has developed the specifications of the PHY, MAC, and convergence layers for UWB (Ultra-Wideband) systems with participation from more than 170 companies.
IMC of Landslide Using LIDAR
Landslide identification and hazard mapping using light detection and ranging (LiDAR) have proven successful in Kentucky and other landslide-prone areas of the United States, such as Oregon, Washington, and North Carolina (Burns and Madin, 2009; McKenna and others, 2008; Wooten and others, 2007). The purpose of this project was to develop a methodology for using LiDAR data to document preexisting landslides in Kenton and Campbell Counties, Kentucky (fig. 1). To do this, potential landslides that previously were not visible on existing maps or coarse digital elevation models (DEMs) were mapped and digitized. Field verification of these mapped locations was conducted where possible. Using high-resolution LiDAR to identify potential landslides provides a framework for analyzing landslide data that are crucial to understanding landslide susceptibility and reducing long-term losses.
A common aspect of cellular-assisted and cellular-controlled short-range communication technologies, including the underlay, overlay, and unlicensed-spectrum approaches, is that they rely on the availability and involvement of the cellular infrastructure. By themselves, these technologies do not provide a means for a graceful degradation of connectivity or content-access services in case the cellular infrastructure becomes partially or completely damaged or dysfunctional. Ideally, short-range or local communication should be maintained in the absence of infrastructure nodes, but should be able to take advantage of cellular functionality when part or all of the infrastructure remains intact.
In statistical machine learning, a major issue is the selection of an appropriate feature space in which input instances have the desired properties for solving a particular problem. For example, in the context of supervised learning for binary classification, it is often required that the two classes are separable by a hyperplane. Where this property is not directly satisfied in the input space, one can map instances into an intermediate feature space where the classes are linearly separable. This intermediate space can either be specified explicitly by hand-coded features, be defined implicitly with a so-called kernel function, or be automatically learned. In the first two cases, it is the user’s responsibility to design the feature space, which can incur a huge cost in terms of computational time or expert knowledge, especially with high-dimensional input spaces, such as when dealing with images.
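A small illustration of an explicit feature map: an inner blob versus an outer ring is not linearly separable in the plane, but after the degree-2 monomial map (the feature space of a homogeneous quadratic kernel) a plain perceptron separates the classes. The data here is synthetic, invented for the illustration:

```python
import numpy as np

def feature_map(X):
    """Explicit degree-2 map: (x1, x2) -> (x1^2, x2^2, sqrt(2)*x1*x2)."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.stack([x1**2, x2**2, np.sqrt(2) * x1 * x2], axis=1)

def perceptron(X, y, epochs=100):
    """Fit a linear separator w for labels y in {-1, +1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:              # misclassified: update
                w += yi * xi
    return w

def predict(w, X):
    return np.sign(np.hstack([X, np.ones((len(X), 1))]) @ w)

# Inner blob (class -1) vs. outer ring (class +1): not linearly separable.
rng = np.random.default_rng(0)
inner = rng.normal(0.0, 0.5, (30, 2))
theta = rng.uniform(0, 2 * np.pi, 30)
outer = np.stack([3 * np.cos(theta), 3 * np.sin(theta)], axis=1)
X = np.vstack([inner, outer])
y = np.array([-1] * 30 + [1] * 30)

w = perceptron(feature_map(X), y)               # separable after mapping
accuracy = (predict(w, feature_map(X)) == y).mean()
```

In the mapped space the separator is simply a threshold on x1^2 + x2^2, i.e. on the squared radius, which is exactly what distinguishes the blob from the ring.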
The most common computer authentication method is to use alphanumerical usernames and passwords. This method has been shown to have significant drawbacks. For example, users tend to pick passwords that can be easily guessed. On the other hand, if a password is hard to guess, then it is often hard to remember. To address this problem, some researchers have developed authentication methods that use pictures as passwords. Here, we conduct a comprehensive survey of the existing graphical password techniques. We classify these techniques into two categories: recognition-based and recall-based approaches. In this seminar topic we discuss the strengths and limitations of each method and point out future research directions in this area.
IP spoofing is a method of attacking a network in order to gain unauthorised access. The attack is based on the fact that Internet communication between distant computers is routinely handled by routers, which find the best route by examining the destination address but generally ignore the origination address. The origination address is only used by the destination machine when it responds back to the source. In a spoofing attack, the intruder sends messages to a computer indicating that the message has come from a trusted system. To be successful, the intruder must first determine the IP address of a trusted system, and then modify the packet headers so that it appears that the packets are coming from the trusted system.
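The mechanics are visible in the IPv4 header layout itself: the 20-byte header carries the source address as a plain field that the sender fills in, so nothing in the format stops a sender from writing someone else's address. A minimal sketch using Python's `struct` (checksum included for completeness; actually injecting such a packet would additionally require a raw socket and privileges):

```python
import socket
import struct

def ipv4_checksum(header):
    """Standard ones'-complement sum over the header's 16-bit words."""
    s = sum(struct.unpack("!10H", header))
    s = (s & 0xFFFF) + (s >> 16)
    s = (s & 0xFFFF) + (s >> 16)    # fold any remaining carry
    return ~s & 0xFFFF

def ipv4_header(src, dst, payload_len, proto=6):
    """Build a minimal 20-byte IPv4 header. `src` is whatever the caller
    claims it is -- the field routers generally do not verify."""
    fields = [(4 << 4) | 5,           # version 4, header length = 5 words
              0,                      # DSCP/ECN
              20 + payload_len,       # total length
              0, 0,                   # identification, flags/fragment offset
              64, proto, 0,           # TTL, protocol (6 = TCP), checksum = 0
              socket.inet_aton(src),  # source address: freely spoofable
              socket.inet_aton(dst)]  # destination: used for routing
    hdr = struct.pack("!BBHHHBBH4s4s", *fields)
    return hdr[:10] + struct.pack("!H", ipv4_checksum(hdr)) + hdr[12:]
```

Ingress filtering (dropping packets whose source address could not legitimately originate on the arriving link) is the standard network-side defence.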
This is one of the latest IEEE seminar topics. The staggering growth of the Internet is driving demand for higher-speed Internet-access services, leading to a parallel growth in broadband adoption. In less than a decade, broadband subscription worldwide has grown from virtually zero to over 200 million. Can broadband wireless deliver the next wave of that growth? Many industry observers believe so. Before we delve into broadband wireless, let us review the state of broadband access today. Digital subscriber line (DSL) technology, which delivers broadband over twisted-pair telephone wires, and cable modem technology, which delivers it over the coaxial cable-TV plant, are the predominant mass-market broadband access technologies today.
Object Tracking in Video Scenes
Object tracking in video processing follows the segmentation step and is more or less equivalent to the ‘recognition’ step in image processing. Detection of moving objects in video streams is the first relevant step of information extraction in many computer vision applications, including traffic monitoring, automated remote video surveillance, and people tracking. There are basically three approaches to object tracking. Feature-based methods aim at extracting characteristics such as points and line segments from image sequences; the tracking stage is then ensured by a matching procedure at every time instant. Differential methods are based on optical flow computation, i.e., on the apparent motion in image sequences, under some regularization assumptions. The third class uses correlation to measure inter-image displacements. Selection of a particular approach largely depends on the domain of the problem.
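The third (correlation-based) class can be sketched directly: slide the object's template over a small search window around its last known position and keep the location with the highest normalized cross-correlation. Illustrative code on plain NumPy arrays standing in for grayscale frames:

```python
import numpy as np

def track_by_correlation(frame, template, top_left, search=5):
    """Find `template` in `frame` near its previous `top_left` position by
    exhaustive normalized cross-correlation over a small search window.
    Returns (new_top_left, best_score)."""
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    best, best_pos = -np.inf, top_left
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top_left[0] + dy, top_left[1] + dx
            if y < 0 or x < 0 or y + th > frame.shape[0] or x + tw > frame.shape[1]:
                continue                    # window falls off the frame
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            score = (p * t).sum() / denom if denom else -np.inf
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

Running this per frame, with the template optionally refreshed from the best match, gives a basic tracker; real systems restrict the search window using a motion model.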
Optical computing includes the optical calculation of transforms and optical pattern matching. Emerging technologies also make the optical storage of data a reality. The speed of computers was achieved by miniaturising electronic components to a very small, micron-size scale, but they are limited not only by the speed of electrons in matter (Einstein’s principle that signals cannot propagate faster than the speed of light) but also by the increasing density of interconnections necessary to link the electronic gates on microchips. The optical computer comes as a solution to the miniaturisation problem. In an optical computer, electrons are replaced by photons, the subatomic bits of electromagnetic radiation that make up light.
Optimizing Compiler for CELL Processor
Developed for multimedia and game applications, as well as other numerically intensive workloads, the CELL processor provides support both for highly parallel codes, which have high computation and memory requirements, and for scalar codes, which require fast response time and a full-featured programming environment. This first-generation CELL processor implements on a single chip a Power Architecture processor with two levels of cache, and eight attached streaming processors with their own local memories and globally coherent DMA engines. In addition to processor-level parallelism, each processing element has a Single Instruction Multiple Data (SIMD) unit that can process from two double-precision floating-point values up to 16 bytes per instruction. This IEEE seminar paper describes, in the context of a research prototype, several compiler techniques that aim at automatically generating high-quality code for the wide range of heterogeneous parallelism available on the CELL processor. Techniques include compiler-supported branch prediction, compiler-assisted instruction fetch, generation of scalar codes on SIMD units, automatic generation of SIMD codes, and data. Results indicate that significant speedup can be achieved with a high level of support from the compiler.
“Resource Optimization” is one of the data oriented topics for computer science Technical Seminars. Virtualization addresses IT’s most pressing challenge: the infrastructure sprawl that compels IT departments to channel 70 percent of their budget into maintenance, leaving scant resources for business-building innovation. The difficulty stems from the architecture of today’s X86 computers: they’re designed to run just one operating system and application at a time. As a result, even small data centers have to deploy many servers, each operating at just five percent to 15 percent of capacity—highly inefficient by any standard. Virtualization software solves the problem by enabling several operating systems and applications to run on one physical server or “host.” Each self-contained “virtual machine” is isolated from the others, and uses as much of the host’s computing resources as it requires.
Over the last several years, a loosely defined collection of computer software known as “Spyware” has become the subject of growing public alarm. Computer users are increasingly finding programs on their computers that they did not know were installed and that they cannot uninstall, that create privacy problems and open security holes that can hurt the performance and stability of their systems, and that can lead them to mistakenly believe that these problems are the fault of another application or their Internet provider. The term “spyware” has been applied to everything from keystroke loggers, to advertising applications that track users’ web browsing, to web cookies, to programs designed to help provide security patches directly to users. More recently, there has been particular attention paid to a variety of applications that piggyback on peer-to-peer file-sharing software and other free downloads as a way to gain access to people’s computers. This report focuses primarily on these so-called “adware” and other similar applications, which have increasingly been the focus of legislative and regulatory proposals.
Software Defined Networking (SDN) is an architectural approach that optimizes and simplifies network operations by more closely binding the interaction (i.e., provisioning, messaging, and alarming) among applications and network services and devices, whether they be real or virtualized. It often is achieved by employing a point of logically centralized network control—which is often realized as an SDN controller—which then orchestrates, mediates, and facilitates communication between applications wishing to interact with network elements and network elements wishing to convey information to those applications. The controller then exposes and abstracts network functions and operations via modern, application-friendly and bidirectional programmatic interfaces.
Emerging requirements associated with digital mapping pose a broad set of challenging problems in image understanding research. Currently several leading research centers are pursuing the development of new techniques for automated feature extraction; for example, road tracking, urban scene generation, and edge-based stereo compilation. Concepts for map-guided scene analysis are being defined which will lead to further work in automated techniques for spatial database validation, revision, and intensification. This paper seeks to describe ongoing activity in this field and suggest areas for future research. Research problems range from the organization of large-scale digital image/map databases for tasks such as screening and assessment, to structuring spatial knowledge for image analysis tasks, and the development of specialized “expert” analysis components and their integration into automated systems. Significantly, prototype image analysis workstations have been configured for both film-based and digital image exploitation, which interface conventional image analysts and extracted spatial data in computer-assisted systems. However, the state-of-the-art research capabilities are fragile, and successful concept demonstrations require thoughtful analysis from both the mapping and image understanding communities.
Programs written in C and C++ are susceptible to memory errors, including buffer overflows and dangling pointers. These errors, which can lead to crashes, erroneous execution, and security vulnerabilities, are notoriously costly to repair. Tracking down their location in the source code is difficult, even when the full memory state of the program is available. Once the errors are finally found, fixing them remains challenging: even for critical security-sensitive bugs, the average time between initial reports and the issuance of a patch is nearly one month. We present Exterminator, a system that automatically corrects heap-based memory errors without programmer intervention. Exterminator exploits randomization to pinpoint errors with high precision. From this information, Exterminator derives runtime patches that fix these errors both in current and subsequent executions. In addition, Exterminator enables collaborative bug correction by merging patches generated by multiple users. We present analytical and empirical results that demonstrate Exterminator’s effectiveness at detecting and correcting both injected and real faults.
Fake Access Point Detector
Wireless access points are popularly used today for convenient Internet access. The growing acceptance of wireless local area networks (WLANs) has introduced various risks of wireless security attacks. The presence of fake access points is one of the most challenging network security concerns for network administrators: if undetected, they can be used to steal sensitive information on the network. Most current solutions for detecting fake access points are not automated, depend on a specific wireless technology, and are rudimentary enough to be easily evaded by hackers. Here we present our proposed solution to detect fake access points, and the related risk assessment is analysed. Our solution requires no new RF devices; it is designed to work with the current infrastructure, is effective and reliable, and is designed to detect inside attacks in an organisation.
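One simple automated check of the kind described, expressible without new RF hardware, compares the beacons seen in a scan against an administrator-provisioned whitelist. The SSIDs and BSSIDs below are made-up examples; a real deployment would feed in live scan results:

```python
# (SSID, BSSID) pairs provisioned by the network administrator.
AUTHORIZED = {
    ("corp-wifi", "aa:bb:cc:dd:ee:01"),
    ("corp-wifi", "aa:bb:cc:dd:ee:02"),
}

def find_fake_aps(scan_results):
    """Flag any beacon that advertises an authorized SSID from an
    unauthorized BSSID -- the classic 'evil twin' signature.
    `scan_results` is a list of (ssid, bssid) tuples from a scan."""
    known_ssids = {ssid for ssid, _ in AUTHORIZED}
    return [(ssid, bssid) for ssid, bssid in scan_results
            if ssid in known_ssids and (ssid, bssid) not in AUTHORIZED]
```

Since a BSSID (MAC address) can itself be forged, production detectors add further checks, such as channel, signal-strength fingerprint, and clock-skew anomalies, but the whitelist comparison is the automated first pass.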
Gi-Fi promises to push wireless communications into a faster gear. For many years cables ruled the world, and optical fibers played a dominant role thanks to their higher bit rates and faster transmission. But installing cables is difficult, which led to wireless access. The foremost of these technologies is Bluetooth, which covers about 9-10 m; Wi-Fi followed with a coverage area of about 91 m. No doubt, the introduction of Wi-Fi wireless networks has proved a revolutionary solution to the “last mile” problem. However, the standard’s original limitations on data exchange rate, range, and number of channels, together with the high cost of the infrastructure, have not yet allowed Wi-Fi to become a total threat to cellular networks on the one hand, or hard-wired networks on the other. Man’s continuous quest for even better technology, despite the substantial advantages of present technologies, led to the introduction of a new, more up-to-date standard for data exchange rate: Gi-Fi.
Teradata is an open system, compliant with ANSI standards. It is currently available on the UNIX MP-RAS and Windows 2000 operating systems. Teradata is a large database server that accommodates multiple client applications making inquiries against it concurrently. Various client platforms access the database through a TCP/IP connection or across an IBM mainframe channel connection. The ability to manage large amounts of data is accomplished using the concept of parallelism, wherein many individual processors perform smaller tasks concurrently to accomplish an operation against a huge repository of data. To date, only parallel architectures can handle databases of this size.
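The parallelism idea can be sketched as hash-based row distribution: each row's primary key is hashed to pick the processing unit that owns it, so a full-table operation becomes many smaller concurrent scans. The unit count, table, and `amp_for` helper below are illustrative assumptions, not Teradata internals.

```python
# Illustrative sketch of hash-distributed storage, the idea behind
# shared-nothing parallelism (unit names and counts are hypothetical).
import hashlib

NUM_AMPS = 4  # number of parallel processing units

def amp_for(primary_key):
    """Deterministically map a row's key to one processing unit."""
    digest = hashlib.md5(str(primary_key).encode()).hexdigest()
    return int(digest, 16) % NUM_AMPS

rows = [("alice", 100), ("bob", 200), ("carol", 300), ("dave", 400)]
buckets = {i: [] for i in range(NUM_AMPS)}
for key, value in rows:
    buckets[amp_for(key)].append((key, value))

# Each unit scans only its own bucket, so the full-table operation
# runs as NUM_AMPS smaller concurrent scans.
for amp, assigned in buckets.items():
    print(amp, assigned)
```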
Data transmission, digital transmission or digital communications is the physical transfer of data (a digital bit stream) over a point-to-point or point-to-multipoint communication channel. Examples of such channels are copper wires, optical fibers, wireless communication channels, and storage media. The data is represented as an electromagnetic signal, such as an electrical voltage, radio wave, microwave or infrared signal. For high rates of data transmission we commonly use optical fibers. Optical fibers are widely used in fiber-optic communications, which permit transmission over longer distances and at higher bandwidths (data rates) than other forms of communication. Fibers are used instead of metal wires because signals travel along them with less loss and are immune to electromagnetic interference. Fibers are also used for illumination, and can be wrapped in bundles to carry images, allowing viewing in tight spaces. However, the technology also has drawbacks, such as cost and fragility.
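The distance advantage of fiber follows directly from its attenuation figure. A minimal link-budget sketch, assuming typical textbook loss values (roughly 0.2 dB/km for single-mode fiber near 1550 nm versus tens of dB/km for high-frequency signals on copper, not vendor specifications):

```python
# Back-of-envelope link budget: received power falls linearly in dB
# with distance, so a lower dB/km loss compounds into far longer reach.

def received_power_dbm(tx_power_dbm, loss_db_per_km, distance_km):
    """Received power after propagation loss, in dBm."""
    return tx_power_dbm - loss_db_per_km * distance_km

TX = 0.0            # transmit power, dBm
FIBER_LOSS = 0.2    # dB/km, typical single-mode fiber near 1550 nm
COPPER_LOSS = 20.0  # dB/km, rough figure for high-frequency copper

for km in (1, 10, 100):
    print(f"{km:>3} km  fiber: {received_power_dbm(TX, FIBER_LOSS, km):6.1f} dBm"
          f"  copper: {received_power_dbm(TX, COPPER_LOSS, km):8.1f} dBm")
```

After 100 km the fiber signal has lost only 20 dB, the same loss copper suffers in a single kilometre, which is why long-haul links use fiber with relatively few repeaters.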
xMax is one of the latest seminar topics. Wireless operators today face a dilemma. Customers are demanding more and more data applications delivered on the go, yet operators are using scarce, expensive licensed spectrum that is already overburdened with delivering core voice services. Support for voice, data, location awareness, chat, and other applications is required by highly mobile customers. The operator faces a choice between acquiring more licensed spectrum (if any is available) and losing customers due to demand for advanced services. xG Technology has a solution to this dilemma of overwhelming demand for advanced applications versus a lack of spectrum.
Road characterization of Digital Maps
Here we present the implementation of a novel system to detect, record, and warn motorists of forthcoming road inconsistencies. In hilly, fog-affected, and unmaintained areas, vehicles and motorists are more prone to accidents, which have proved fatal in the past. There is also insufficient knowledge about the location and type of these inconsistencies. Hence, this system provides a two-step solution that ultimately warns the motorist about the inconsistency. It is also a self-learning, recursive system that requires little human involvement. The technologies available in present automobiles are sufficient to create a warning system, and the prospect of integrating them inspired this paper to develop the system.
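The warning step of such a system can be sketched as a proximity check: given the vehicle's GPS fix and a stored map of known inconsistencies, warn when one lies within a threshold distance. The coordinates, labels, and threshold below are invented for illustration.

```python
# Hypothetical sketch of the warning step: compare the vehicle's GPS
# position against a self-maintained database of road inconsistencies.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

# Toy database of previously detected inconsistencies (lat, lon, type).
INCONSISTENCIES = [
    (12.9716, 77.5946, "pothole"),
    (13.0827, 80.2707, "landslide debris"),
]

def warnings_for(lat, lon, threshold_km=1.0):
    """Inconsistency types within threshold_km of the current position."""
    return [kind for ilat, ilon, kind in INCONSISTENCIES
            if haversine_km(lat, lon, ilat, ilon) <= threshold_km]

print(warnings_for(12.9720, 77.5950))  # vehicle ~60 m from the pothole
```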
Biometric Electronic Wallet
Bitcoin is an experimental, decentralized digital currency that enables instant payments to anyone, anywhere in the world. Bitcoin uses peer-to-peer technology to operate with no central authority. A wallet is the place where Bitcoins are stored. Once a Bitcoin wallet is installed on a computer or mobile phone, it generates an initial Bitcoin address, and the user can then create a new address for each transaction. The wallet first generates a private key and then converts that private key into a Bitcoin address. The wallet keeps track of private keys, usually by storing them in an encrypted wallet file, either on the hard drive, on a server on the Internet, or elsewhere.
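The private-key-to-address step can be sketched with standard hashing. This is a deliberately simplified stand-in: real Bitcoin derives a secp256k1 public key and applies SHA-256 followed by RIPEMD-160 before Base58Check encoding; here plain SHA-256 replaces both elliptic-curve math and RIPEMD-160 so the sketch runs on the standard library alone.

```python
# Simplified sketch of wallet address derivation. NOT the real Bitcoin
# scheme: the ECDSA public-key step and RIPEMD-160 are replaced by a
# plain SHA-256 so the example stays stdlib-only.
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    """Base58 encoding with a 4-byte double-SHA-256 checksum appended."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    pad = len(payload) - len(payload.lstrip(b"\x00"))  # leading zeros -> '1'
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    return "1" * pad + out

def toy_address(private_key: bytes) -> str:
    key_hash = hashlib.sha256(private_key).digest()[:20]  # stand-in for HASH160
    return base58check(b"\x00" + key_hash)                # 0x00 = mainnet prefix

print(toy_address(b"my secret key"))
```

The derivation is one-way: anyone can check that an address matches a key, but the address reveals nothing about the private key, which is why the wallet file holding the keys must be encrypted and backed up.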
HGI with 3D-Environment
A Virtual Environment (VE) system offers a natural and intelligent user interface. Hand gesture recognition allows more efficient and easier interaction in a VE than human-computer interface (HCI) devices like keyboards and mice. We propose a hand gesture recognition interface that generates commands to control objects directly in a game. Our novel hand gesture recognition system utilizes both bag-of-features and a Support Vector Machine (SVM) to realize user-friendly interaction between humans and computers. The HCI based on hand gesture recognition interacts with objects in a 3D virtual environment. With this interface, the user can control and direct a helicopter through a set of hand gesture commands governing its movements. Our system shows that a hand gesture recognition interface can achieve more intuitive and flexible interaction for the user than other HCI devices.
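The bag-of-features step can be sketched in a few lines: each local descriptor extracted from the hand image is mapped to its nearest codebook word, and the word counts form a fixed-length histogram that an SVM can classify. The 2-D codebook and descriptors below are toy data standing in for real image features.

```python
# Minimal bag-of-features sketch: quantize descriptors to the nearest
# codebook word and build a normalized histogram (the SVM input).
# Codebook and descriptors are toy 2-D points, not real image features.
import math

def nearest_word(descriptor, codebook):
    """Index of the codebook word closest to the descriptor."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(codebook)), key=lambda i: dist(descriptor, codebook[i]))

def bag_of_features(descriptors, codebook):
    """Normalized histogram of codebook-word occurrences."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1
    total = sum(hist)
    return [h / total for h in hist]

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
descriptors = [(0.1, 0.1), (0.9, 1.0), (0.2, 0.0), (0.0, 0.8)]
print(bag_of_features(descriptors, codebook))
```

Because every gesture image yields a histogram of the same length regardless of how many descriptors were detected, the SVM sees a fixed-dimensional input, which is what makes this representation convenient for classification.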
MAS to Windows Azure Cloud
Schema migration involves moving an application's schema to the cloud. Various script-generation techniques are currently used for schema migration, and these sometimes require the application architecture to be changed to make it compatible with the cloud platform. This paper shows an approach that uses user templates to capture the schema structure and access control mechanism, migrating the application schema without script-generation tools. It further gives the necessary algorithm to show the feasibility of the implementation.
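The template idea can be sketched as capturing the schema structure and access rules as plain data, with one generic renderer emitting the cloud-side definition. The table, columns, role, and `render` function below are hypothetical illustration, not the paper's actual template format.

```python
# Hypothetical sketch of template-driven schema migration: the schema
# and its access control are captured as data, and a single generic
# renderer emits the target definition -- no per-application script.

TEMPLATE = {
    "table": "orders",
    "columns": [("id", "INT"), ("amount", "DECIMAL(10,2)")],
    "access": {"app_role": ["SELECT", "INSERT"]},
}

def render(template):
    """Emit CREATE TABLE and GRANT statements from one user template."""
    cols = ", ".join(f"{name} {ctype}" for name, ctype in template["columns"])
    ddl = [f"CREATE TABLE {template['table']} ({cols});"]
    for role, perms in template["access"].items():
        ddl.append(f"GRANT {', '.join(perms)} ON {template['table']} TO {role};")
    return "\n".join(ddl)

print(render(TEMPLATE))
```

Because the renderer, not the application, knows the target platform's syntax, retargeting to a different cloud means swapping the renderer while the templates stay unchanged.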
Fake Access Point Detector
In today’s world the Internet has become an essential requirement; everyone wants to remain connected with the world. Many organizations use Wi-Fi to provide access to the internet and intranet, enabling a flexible workforce in fields such as communications, banking, and industry. Users access the internet frequently, and the information they transmit is broadcast through the air, so any user within range of the Wi-Fi signal can easily connect to the network and sniff information using a fake access point. A fake AP is an unauthorized access point plugged into a corporate network, posing a serious security threat. In our project we propose the detection of fake APs along with an analysis of the related risk assessment, while also providing secure and effective communication. WLAN security technology has major uses in many fields; wireless LANs have a wide range of applications due to their flexibility and easy access. The use of public Wi-Fi has reached a level that is difficult to avoid: according to a poll conducted on Kaspersky’s global Facebook pages, 32 percent of the more than 1,600 respondents said they use public Wi-Fi regardless of the security concerns.