About IJSRD: Impact Factor 2.39

IJSRD is a leading e-journal that encourages and explores new ideas and current trends in Engineering and Science by publishing original research papers.

Design and Simulation of Radix-8 Booth Encoder Multiplier for Signed and Unsigned Numbers

IJSRD highlights a noteworthy research work from the Electronics & Communication research area.

Abstract–This paper presents the design and simulation of a signed-unsigned Radix-8 Booth Encoding multiplier. The Radix-8 Booth Encoder circuit generates n/3 partial products in parallel. By extending the sign bit of the operands and generating an additional partial product, the signed-unsigned Radix-8 BE multiplier is obtained. A Carry Save Adder (CSA) tree and a final Carry Look-ahead (CLA) adder are used to speed up the multiplier operation. Since both signed and unsigned multiplication are performed by the same multiplier unit, the required hardware and chip area are reduced, which in turn reduces the power dissipation and cost of the system. The simulation is done in Verilog on the Xilinx 13.3 platform, which provides flexibility in evaluating the various design parameters.
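The signed-unsigned behavior described in the abstract can be illustrated with a small Python model (a behavioral sketch under our own assumptions, not the Verilog design from the paper): the same multiply core serves both modes, and only the interpretation of the operand bit patterns changes, mirroring how one hardware unit can handle both signed and unsigned inputs.

```python
def extend(value, width, signed):
    """Interpret a width-bit pattern as unsigned, or sign-extend it
    to its two's-complement value when signed is True."""
    value &= (1 << width) - 1
    if signed and value & (1 << (width - 1)):
        return value - (1 << width)
    return value


def multiply(a, b, width, signed):
    """Multiply two width-bit patterns; the same core multiply is used
    for both modes, only the operand interpretation differs."""
    return extend(a, width, signed) * extend(b, width, signed)
```

For example, the 4-bit pattern 1111 times 0010 yields 30 in unsigned mode (15 × 2) but −2 in signed mode (−1 × 2).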

Keywords: Array multiplier, Baugh-Woolley multiplier, Braun array multiplier, CLA, CSA, Radix-8 Booth Encoding multiplier, Signed-unsigned


Booth’s algorithm involves repeatedly adding one of two predetermined values to a product P and then performing a rightward arithmetic shift on P. Radix-8 Booth encoding is most often used to avoid variable-size partial product arrays. Before designing the Radix-8 BE, the multiplier has to be converted into a Radix-8 number by grouping its bits into overlapping groups of four bits according to the Booth encoder table given afterwards. Prior to converting the multiplier, a zero is appended to the Least Significant Bit (LSB) of the multiplier.
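The grouping just described can be sketched in Python (an illustrative model, not the paper's Verilog implementation): a zero is appended below the LSB, then overlapping four-bit groups are scanned, stepping three bits per digit, so each radix-8 Booth digit lies in {−4, …, +4}.

```python
def radix8_booth_digits(m, n):
    """Recode an n-bit two's-complement multiplier m into radix-8
    Booth digits in {-4..4}. A zero is appended below the LSB, then
    overlapping 4-bit groups are scanned, stepping 3 bits per digit."""
    m &= (1 << n) - 1
    bits = m << 1            # the appended zero becomes bit 0
    digits = []
    i = 0
    while i < n:
        g = (bits >> i) & 0b1111
        b0, b1, b2, b3 = g & 1, (g >> 1) & 1, (g >> 2) & 1, (g >> 3) & 1
        digits.append(-4 * b3 + 2 * b2 + b1 + b0)  # Booth encoder table
        i += 3
    return digits
```

Summing digit_k × 8^k reconstructs the signed value of the multiplier, which is one way to check the recoding; for unsigned operands, the extra partial product mentioned in the abstract covers the bits above the MSB.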




Emergent Artificial Intelligence


What happens when a computer can learn on the job?

Artificial intelligence (AI) is, in simple terms, the science of doing by computer the things that people can do. Over recent years, AI has advanced significantly: most of us now use smartphones that can recognize human speech, or have travelled through an airport immigration queue using image-recognition technology. Self-driving cars and automated flying drones are now in the testing stage before anticipated widespread use, while for certain learning and memory tasks, machines now outperform humans. Watson, an artificially intelligent computer system, beat the best human contestants at the quiz show Jeopardy!

Artificial intelligence, in contrast to normal hardware and software, enables a machine to perceive and respond to its changing environment. Emergent AI takes this a step further, with progress arising from machines that learn automatically by assimilating large volumes of information. An example is NELL, the Never-Ending Language Learning project from Carnegie Mellon University, a computer system that not only reads facts by crawling through hundreds of millions of web pages, but attempts to improve its reading and understanding competence in the process in order to perform better in the future.

Like next-generation robotics, improved AI will lead to significant productivity advances as machines take over certain tasks from humans – and even perform them better. There is substantial evidence that self-driving cars will reduce collisions, and the resulting deaths and injuries from road transport, as machines avoid human errors, lapses in concentration and defects in sight, among other problems. Intelligent machines, with faster access to a much larger store of information and able to respond without human emotional biases, might also perform better than medical professionals in diagnosing diseases. The Watson system is now being deployed in oncology to assist in diagnosis and in identifying personalized, evidence-based treatment options for cancer patients.

Long the stuff of dystopian sci-fi nightmares, AI clearly comes with risks – the most obvious being that super-intelligent machines might one day overcome and enslave humans. This risk, while still decades away, is taken increasingly seriously by experts, many of whom signed an open letter coordinated by the Future of Life Institute in January 2015 to direct the future of AI away from potential pitfalls. More prosaically, economic changes prompted by intelligent computers replacing human workers may exacerbate social inequalities and threaten existing jobs. For example, automated drones may replace most human delivery drivers, and self-driven short-hire vehicles could make taxis increasingly redundant.

On the other hand, emergent AI may make attributes that are still exclusively human – creativity, emotions, interpersonal relationships – more clearly valued. As machines approach human levels of intelligence, this technology will increasingly challenge our view of what it means to be human, as well as of the risks and benefits posed by the rapidly closing gap between man and machine.





No Signal Yet From Philae (But ESA Isn’t Giving Up)

Lights in the Dark

Philae’s view from its current location on comet 67P/C-G, captured by one of its three CIVA cameras. (ESA/Rosetta/Philae/CIVA)

The first attempt by ESA and Rosetta to hear back from Philae has turned up only radio silence – but that doesn’t necessarily mean the lander is on permanent shutdown. It may just be that it’s still too cold and dark where Philae is to have sufficiently warmed up its components for reactivation.

“It was a very early attempt; we will repeat this process until we receive a response from Philae,” said DLR (Germany’s Aerospace agency) Project Manager Stephan Ulamec. “We have to be patient.”

After landing in an as yet unconfirmed location on comet 67P on November 12, 2014, Philae performed all of its primary science tasks before running out of battery power and entering a hibernation “safe” mode. Its reawakening is anticipated by mission engineers as the comet gets closer to the…


#IJSRD Computational Modeling and Docking for the H1N1 Virus using Bioinformatics


Abstract— This research work is carried out using the H1N1 virus Hemagglutinin protein sequence of Influenza A virus [AEN79399] in FASTA format; the sequence is retrieved from the NCBI database and has no known structure. The aim of the present study is to carry out computational modeling of the mentioned protein sequence using the 3D-JIGSAW protein comparative modeling server, the Verify-3D structure evaluation server, and CHIMERA. Validation is done with Verify3D, which analyzes the compatibility of an atomic model (3D) with its own amino acid sequence (1D). All results are collected toward identifying possible ligand/drug candidates in silico. The last step is a docking study of the H1N1 protein with Tamiflu (a possible ligand), which helps in finding a suitable drug.
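As a small illustration of the workflow's first step (handling a sequence retrieved in FASTA format), the sketch below parses a FASTA file with plain Python. The filename and record contents are hypothetical examples, and real pipelines would more typically use a library such as Biopython's SeqIO.

```python
def read_fasta(path):
    """Parse a FASTA file into a list of (header, sequence) pairs.

    Lines beginning with '>' start a new record; subsequent lines are
    concatenated into that record's sequence.
    """
    records = []
    header, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    records.append((header, "".join(seq)))
                header, seq = line[1:], []
            elif line:
                seq.append(line)
    if header is not None:
        records.append((header, "".join(seq)))
    return records
```

A file containing ">AEN79399 hemagglutinin" followed by sequence lines would yield a single (header, sequence) record ready for downstream modeling.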


