Friday, April 5, 2019
Differences Between Linear Block And Convolutional Codes Computer Science Essay
Coding theory is, in the main, the detailed analysis of the properties of codes and their suitability for a specific application. Codes are mainly employed in cryptography, error correction and data compression, and nowadays they are also applied to network coding. This essay deals with linear block codes, convolutional codes and turbo codes respectively.

LINEAR BLOCK CODES

In coding theory, a linear block code is an error correcting code for which any linear combination of codewords is again a codeword. In simple words, a linear block code possesses the linearity property: the sum of any two codewords is also a codeword. Block codes and convolutional codes are the two main families of channel codes, and turbo codes are a combination of these two types. Linear codes allow more efficient encoding and decoding algorithms than other codes. (http://en.wikipedia.org/wiki/linear codes)

CONVOLUTIONAL CODES

The main principle of a convolutional code is that each output codeword symbol is a weighted sum of various input message symbols. This resembles the convolution used in LTI systems, where the output of a system is found from its impulse response and the respective input. Hence the output of a convolutional encoder can be obtained by convolving the input bits with the contents of the convolutional encoder registers. (http://en.wikipedia.org/wiki/convolutional codes)

TURBO CODES

Turbo codes, developed in 1993, are a class of high-performance forward error correction codes, and they were the first practical codes to come close to the channel capacity, the code rate at which reliable communication is still possible for a given noise level. Berrou, Glavieux and Thitimajshima introduced these codes in 1993 in their paper "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes". After the introduction of these parallel turbo codes, many other types were developed, including repeat-accumulate codes and serially concatenated versions. Iterative turbo decoding methods have also been applied to conventional FEC systems. (http://en.wikipedia.org/wiki/turbo)

2. CORE OF THE PAPER

1. PRINCIPLES OF THE LINEAR BLOCK CODES

A block code of length n with 2^k codewords is said to be a linear (n, k) code if and only if its codewords form a k-dimensional subspace of the vector space of all n-tuples over GF(2). More generally, a linear code of length n and rank k is a linear subspace C with dimension k of the vector space F_q^n, where F_q is the finite field with q elements. Such a code is called a q-ary code; the code is said to be binary or ternary if q = 2 or q = 3 respectively. Linear block codes are often described by their symbol alphabet (e.g. binary, ternary) and by the parameters (n, m, d_min), where

n is the codeword length in symbols,
m is the number of source symbols used for encoding, and
d_min is the code's minimum Hamming distance.

Linear block codes are of many types:

Cyclic codes
Repetition codes
Parity codes
Polynomial codes
Reed-Solomon codes
Algebraic geometric codes
Reed-Muller codes
Perfect codes
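The linearity property defined above is easy to check numerically. The short Python sketch below builds a small binary parity-check code (one of the code families just listed) and verifies that the modulo-2 sum of any two codewords is again a codeword; the particular (4, 3) single parity check code is chosen only as a convenient example.

    import itertools

    # A (4, 3) single parity check code: every codeword is a 3-bit message
    # followed by one parity bit that makes the total number of ones even.
    codebook = set()
    for message in itertools.product([0, 1], repeat=3):
        parity = sum(message) % 2
        codebook.add(message + (parity,))

    # Linearity: the bitwise XOR (mod-2 sum) of any two codewords is a codeword.
    for c1, c2 in itertools.combinations(codebook, 2):
        assert tuple(a ^ b for a, b in zip(c1, c2)) in codebook

    print(len(codebook), "codewords; the sum of any two codewords is again a codeword")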
A. GENERATOR MATRIX AND PARITY CHECK MATRIX

Since a linear code can be considered as a linear subspace C of F_q^n, any codeword c in C can be represented as a linear combination of a set of basis vectors g_1, ..., g_k, such that

c = m_1 g_1 + m_2 g_2 + ... + m_k g_k = mG,

where G is the generator matrix and m is the message.

B. THEORETICAL CONCEPTS

As a linear subspace of F_q^n, the entire code C can be represented as the span of a minimal set of codewords. These codewords are collected as the rows of a matrix G known as the generator matrix for the code C. The standard form of the generator matrix is G = (I_k | A), where I_k is the k x k identity matrix and A is a k x (n-k) matrix.

A check matrix is defined as a matrix representing a linear map H whose kernel is C. If G = (I_k | A) is a generator matrix for the code C in standard form, then H = (-A^T | I_(n-k)) is a check matrix for C (over GF(2) the minus sign can be dropped).

C. HAMMING CODES

Hamming codes were the first codes developed for the purpose of error correction, and they are widely used in digital communication systems. For any positive integer r >= 2, there exists a [2^r - 1, 2^r - r - 1, 3] Hamming code. For example, with a suitable generator matrix and parity check matrix one obtains the [7, 4, 3] Hamming code (a worked sketch in code is given at the end of this section).

D. HADAMARD CODE

The Hadamard code is capable of correcting many errors; it is a [2^r, r, 2^(r-1)] linear code. Since the Hadamard code has minimum distance 2^(r-1), it can correct 2^(r-2) - 1 errors. For example, a suitable 3 x 8 generator matrix yields an [8, 3, 4] Hadamard code.

E. MERITS

Linear codes form a very large class of error correcting codes. They generally offer a simple description, an easy encoding procedure, nice structural properties and conceptually simple decoding.
If C is a linear code then the minimum distance d(C) is easy to compute.
Linear codes have simple specifications: to specify an (n, k) linear code it is sufficient to list k linearly independent codewords, whereas for a non-linear code we generally have to list all the codewords.
The most important linear block codes are Hamming codes, cyclic codes, Reed-Solomon codes, BCH codes, LDPC codes and turbo codes.

F. DEMERITS

The main drawback of the Hamming code is that it has a fixed Hamming distance, and it is also very difficult to implement coders for large blocks. The fixed Hamming distance allows detection of two error bits and correction of a single erroneous bit. A code which allows the implementor to choose the desired Hamming distance would therefore be more useful. Practical Hamming coders allow codewords of only about 31 bits.

G. APPLICATIONS

(1) The properties of linear block codes are used in many applications. Trellis shaping is generated by using the syndrome-coset uniqueness property of linear block codes. (2) They are also used in sensor networks for distributed source coding. (3) With these codes, phase shifts can easily be detected and corrected, and multiple signals can be sent on one channel. (4) Other applications include mobile communication systems based on CDMA.

(References: William E. Ryan and Shu Lin (2009), Channel Codes: Classical and Modern, Cambridge University Press; Thomas M. Cover and Joy A. Thomas (1991), Elements of Information Theory, John Wiley and Sons.)
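As a sketch of how the standard-form matrices from subsections A-C fit together, the following Python snippet builds G = (I_k | A) and H = (A^T | I_(n-k)) for a [7, 4, 3] Hamming code (the particular choice of A is an assumption made for illustration), encodes a message, and corrects a single-bit error by syndrome lookup.

    import numpy as np

    # One possible A for the [7,4] Hamming code (assumed here for illustration).
    A = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 1],
                  [1, 1, 1]])
    k, r = A.shape                                    # k = 4 message bits, r = n - k = 3 parity bits
    G = np.hstack([np.eye(k, dtype=int), A])          # G = (I_k | A)
    H = np.hstack([A.T, np.eye(r, dtype=int)])        # H = (A^T | I_{n-k}) over GF(2)

    m = np.array([1, 0, 1, 1])
    c = m @ G % 2                        # encode: c = mG over GF(2)

    received = c.copy()
    received[5] ^= 1                     # flip one bit to simulate a channel error

    syndrome = H @ received % 2          # a nonzero syndrome indicates an error
    # For a Hamming code, every single-error syndrome matches exactly one column of H.
    error_pos = next(i for i in range(G.shape[1])
                     if np.array_equal(H[:, i], syndrome))
    corrected = received.copy()
    corrected[error_pos] ^= 1
    print("codeword:", c, "corrected:", corrected, "match:", np.array_equal(c, corrected))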
2. PRINCIPLES OF CONVOLUTIONAL CODES

Convolutional codes come under the category of error correction codes in which each m-bit information symbol is encoded into an n-bit symbol, where the code rate is m/n (n >= m), and the transformation is a function of the last K information symbols; here K denotes the constraint length of the code.

A. CONVOLUTIONAL ENCODER

To encode data convolutionally, we start with K memory registers, each holding one input bit. Unless otherwise specified, all memory registers start with a value of 0. The encoder has n modulo-2 adders (a modulo-2 adder can be implemented with a single Boolean XOR gate) and n generator polynomials, one for each adder. An input bit m1 is fed into the leftmost register. Using the generator polynomials and the existing values in the remaining registers, the encoder outputs n bits. All register values are then shifted to the right and the encoder waits for the next input bit. If there are no remaining input bits, the encoder continues shifting until all registers have returned to the zero state. The figure below shows a rate-1/3 encoder with constraint length K = 3 and generator polynomials g1 = (1,1,1), g2 = (0,1,1) and g3 = (1,0,1). The output bits are calculated as follows:

n1 = m1 + m0 + m-1
n2 = m0 + m-1
n3 = m1 + m-1

Img.1. Rate 1/3 non-recursive, non-systematic convolutional encoder with constraint length 3.
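A minimal Python sketch of this encoder, using the generator polynomials g1 = (1,1,1), g2 = (0,1,1), g3 = (1,0,1) and the output equations given above; the tap ordering and the flush-bit termination are implementation assumptions.

    def conv_encode(bits, generators=((1, 1, 1), (0, 1, 1), (1, 0, 1))):
        """Rate-1/3, K = 3 non-recursive convolutional encoder.

        Each generator is a tap vector over the window [m1, m0, m-1], i.e. the
        current input bit followed by the two previous bits, matching
        n1 = m1 + m0 + m-1, n2 = m0 + m-1, n3 = m1 + m-1 from the text.
        """
        K = len(generators[0])
        # Append K-1 flush zeros so the registers return to the all-zero state.
        padded = list(bits) + [0] * (K - 1)
        window = [0] * K                         # [m1, m0, m-1], initially all zero
        out = []
        for b in padded:
            window = [b] + window[:-1]           # shift right, insert the new bit on the left
            for g in generators:
                out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
        return out

    # Example usage (input bits chosen arbitrarily):
    print(conv_encode([1, 0, 1, 1]))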
B. RECURSIVE AND NON-RECURSIVE CODES

The encoder in the figure above is non-recursive, whereas the one below is recursive.

Img.2. Rate 1/2 recursive, systematic convolutional encoder with constraint length 4.

It can easily be observed that the encoded input bits are also included in the output sequence. Such codes are called systematic; the other type of codes are called non-systematic. Systematic codes are generally recursive, whereas non-systematic codes are generally non-recursive. The convolutional encoder takes its name from the fact that it performs a convolution of the input stream with the encoder's impulse responses:

y_j = h_j * x,

where x is the input sequence, y_j is the sequence from output j, and h_j is the impulse response of output j.

A convolutional encoder is a discrete linear time-invariant system. Every output of the encoder can be described by its own transfer function, which is related to the impulse response through the z-transform. For the rate-1/3 encoder above, the transfer functions are

H1(z) = 1 + z^-1 + z^-2,
H2(z) = z^-1 + z^-2,
H3(z) = 1 + z^-2.

For the second (recursive) encoder, the transfer functions are rational functions of z^-1 of the form H_j(z) = P_j(z)/Q_j(z), where Q_j is the feedback polynomial. Define m as the maximum of the polynomial degrees of the H_j(z), where the degree of a rational function P/Q is taken as the maximum of the degrees of P and Q. The constraint length is then defined as K = m + 1.

C. TRELLIS DIAGRAM

A convolutional encoder can also be regarded as a finite state machine. An encoder with n binary memory cells has 2^n states. Imagine that the encoder shown in Img.1 has a 1 in the left memory cell (m0) and a 0 in the right one (m-1); we refer to such a state as 10. Depending on the input bit, at the next step the encoder can move either to the 01 state or to the 11 state. All possible transitions can be drawn as a trellis diagram, and this gives the idea behind decoding: if a received sequence does not fit this graph, then it was received with errors, and we must choose the nearest correct sequence. The dotted lines indicate transitions where a 1 is input and the solid lines where a 0 is input. A path through the trellis represents an actual encoded sequence; one such path is shown in red as an example.

D. FREE DISTANCE AND THE ERROR DISTANCE

The minimum distance between different encoded sequences is called the free distance d_free. The correcting capability t is the number of errors that can be corrected by the code, and it is given as

t = floor((d_free - 1) / 2).

F. DECODING CONVOLUTIONAL CODES

Convolutional codes can be decoded by several algorithms. The Viterbi algorithm is universally used for relatively small values of K, as it provides maximum likelihood performance and is highly parallelizable. Viterbi decoders are easy to implement in software on CPUs with SIMD instruction sets and also in VLSI hardware. Longer constraint length codes are more practically decoded with one of several sequential decoding algorithms, of which the Fano algorithm is the best known. Unlike Viterbi decoding, sequential decoding is not maximum likelihood, but its complexity increases only slightly with the constraint length, allowing the use of strong, long-constraint-length codes. Such codes were used in the Pioneer program of the early 1970s to Jupiter and Saturn, but gave way to shorter, Viterbi-decoded codes, often concatenated with large Reed-Solomon error correction codes, which steepen the overall bit error rate curve and produce very low residual undetected error rates. Both the Viterbi and the sequential decoding algorithms return hard decisions: the bits that form the most likely codeword. An approximate confidence measure can be added to each bit by using the soft-output Viterbi algorithm, and maximum a posteriori (MAP) soft decisions for each bit can be obtained with the BCJR algorithm. (A minimal Viterbi decoder sketch in code is given at the end of this section.)

G. POPULAR CONVOLUTIONAL CODES

A Viterbi-decoded convolutional code with constraint length K = 7 and rate 1/2 is used in the Voyager program. Longer constraint lengths produce more powerful codes, but the complexity of the Viterbi algorithm increases exponentially with the constraint length, limiting these more powerful codes to deep space missions where the extra performance is worth the increased decoder complexity. The Cassini probe to Saturn, Mars Pathfinder and the Mars Exploration Rovers use a K = 15, rate 1/6 code; this code performs about 2 dB better than the simpler K = 7 code, at a cost of 256 times the decoding complexity.

H. MERITS

Convolutional codes are often used to correct errors in noisy channels.
These codes perform well even under very bad channel conditions.
They are often used in radio communications, mobile telephony and voice-band modems.
Convolutional codes generally give good results in lower-noise environments.
They are also used in voice-band modems (V.32, V.17, V.34) and in GSM mobile phones.

I. DEMERITS

Even though the convolutional encoder is a very simple device, decoding is a complex task.
Convolutional codes do not provide more protection against noise than comparable block codes.
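Returning to the decoding discussion in subsection F, the following is a minimal hard-decision Viterbi decoder sketch for the rate-1/3, K = 3 code encoded earlier; it is a simplified illustration rather than a production decoder, and it reuses the conv_encode tap ordering assumed in the earlier sketch.

    def viterbi_decode(received, generators=((1, 1, 1), (0, 1, 1), (1, 0, 1))):
        """Hard-decision Viterbi decoding for the rate-1/3, K = 3 code above.

        `received` is a flat list of (possibly corrupted) output bits; the function
        returns the most likely input bits, including the two flush zeros.
        """
        n = len(generators)            # output bits per input bit
        K = len(generators[0])         # constraint length
        num_states = 2 ** (K - 1)      # state = contents of the two memory cells

        def branch(state, bit):
            # window = [current input, memory cells...]; same layout as conv_encode
            window = [bit] + [(state >> i) & 1 for i in range(K - 2, -1, -1)]
            out = [sum(g * w for g, w in zip(gen, window)) % 2 for gen in generators]
            next_state = (bit << (K - 2)) | (state >> 1)
            return out, next_state

        INF = float("inf")
        metrics = [0 if s == 0 else INF for s in range(num_states)]  # start in state 0
        paths = [[] for _ in range(num_states)]

        for t in range(0, len(received), n):
            block = received[t:t + n]
            new_metrics = [INF] * num_states
            new_paths = [None] * num_states
            for state in range(num_states):
                if metrics[state] == INF:
                    continue
                for bit in (0, 1):
                    out, nxt = branch(state, bit)
                    dist = sum(o != r for o, r in zip(out, block))  # Hamming branch metric
                    cand = metrics[state] + dist
                    if cand < new_metrics[nxt]:
                        new_metrics[nxt] = cand
                        new_paths[nxt] = paths[state] + [bit]
            metrics, paths = new_metrics, new_paths

        # The encoder was flushed to the all-zero state, so trace back from state 0.
        return paths[0]

    # Round trip with a single flipped bit (positions chosen arbitrarily):
    encoded = conv_encode([1, 0, 1, 1])
    encoded[4] ^= 1
    print(viterbi_decode(encoded))    # expected: [1, 0, 1, 1, 0, 0] including flush bits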
J. APPLICATIONS

Convolutional codes are used in numerous applications aiming at reliable data transfer, including radio, mobile communications, digital video communications and satellite communications. They are frequently implemented in concatenation with a hard-decision code, particularly Reed-Solomon, which makes the overall scheme more efficient.

(References: Tutorial on convolutional coding and decoding; the online textbook Information Theory, Inference, and Learning Algorithms by David J.C. MacKay.)

3. PRINCIPLES OF TURBO CODES

By using a block code or a convolutional code with a long constraint length and a large block length, it is theoretically possible to approach the Shannon limit, but this approach is impractical because of the processing power needed for decoding. Turbo codes overcome this drawback by using recursive coders and iterative soft decoders. The main aim of the recursive coder is to make convolutional codes with short constraint lengths behave like block codes with a large block length, while the iterative soft decoder progressively improves the estimate of the received message.

The figure below shows a classical encoder implementation which illustrates the general design of turbo codes. This encoder sends three sub-blocks of bits. The first sub-block is the m-bit block of payload data. The second sub-block contains n/2 parity bits for the payload data, computed with a recursive systematic convolutional (RSC) code. The third sub-block contains n/2 parity bits for a known permutation of the payload data, again computed with an RSC code. The total number of bits in the codeword is m + n. The interleaver is the device which carries out the permutation of the payload data. This turbo encoder consists of two identical RSC coders, C1 and C2, connected using a concatenation scheme referred to as parallel concatenation; here M acts as a memory register (a small code sketch of this parallel concatenation is given at the end of this section). The delay line and the interleaver force the input bits to appear in different sequences at the two coders. At the first iteration, the input sequence d_k appears at both encoder outputs, x_k and y_1k or y_2k, because of the encoder's systematic nature.

DECODER

The decoder is built from two elementary decoders connected in series. The first decoder, DEC1, operates at the lower speed (R1) and is therefore intended for the C1 encoder; DEC2 is intended for C2. DEC1 yields a soft decision and introduces a delay L1, while DEC2 introduces a delay L2. An interleaver installed between the two decoders scatters the error bursts coming from the DEC1 output. The DI block stands for the demultiplexing and insertion module; it works as a switch, redirecting input bits to DEC1 at one moment and to DEC2 at another, and in the OFF state it feeds both inputs with padding bits (zeros).

Consider a memoryless AWGN channel, and assume that at the k-th iteration the decoder receives the pair of random variables

x_k = (2 d_k - 1) + a_k,
y_k = 2 (Y_k - 1) + b_k,

where Y_k is the k-th bit from the encoder output y_k, and a_k and b_k are independent noise components with the same variance. DEC1 produces a soft decision, the log-likelihood ratio L1(d_k), and passes it to DEC2; L1(d_k) is generally referred to as the logarithm of the likelihood ratio. The a posteriori probability (APP) of the data bit, p(d_k = i) with i in {0, 1}, gives the probability of interpreting a received bit as i. DEC2, by contrast, produces a hard decision, i.e. a decoded bit. The Viterbi algorithm cannot calculate the APP, so it is not used in DEC1; a modified BCJR algorithm is used instead. For DEC2, the Viterbi algorithm is an appropriate choice.
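A minimal sketch of the parallel concatenation described above, assuming the common (1, 5/7) RSC constituent code and a random interleaver purely for illustration; trellis termination and puncturing are omitted.

    import random

    def rsc_parity(bits):
        """Recursive systematic convolutional (RSC) encoder with feedback
        polynomial 1 + D + D^2 and feedforward polynomial 1 + D^2 (the common
        (1, 5/7) constituent code, assumed here as an example). Returns only the
        parity stream; the systematic stream is the input itself."""
        s1 = s2 = 0
        parity = []
        for d in bits:
            a = d ^ s1 ^ s2          # feedback: bit entering the register
            parity.append(a ^ s2)    # feedforward taps on the register input and s2
            s2, s1 = s1, a           # shift the two memory cells
        return parity

    def turbo_encode(payload, interleaver):
        """Parallel concatenation: systematic bits, parity from RSC1 on the
        payload, and parity from RSC2 on an interleaved copy of the payload."""
        permuted = [payload[i] for i in interleaver]
        return payload, rsc_parity(payload), rsc_parity(permuted)

    # Example: a random 16-bit payload and a random interleaver (both assumptions).
    payload = [random.randint(0, 1) for _ in range(16)]
    interleaver = random.sample(range(16), 16)
    x, y1, y2 = turbo_encode(payload, interleaver)
    print("systematic:", x)
    print("parity 1  :", y1)
    print("parity 2  :", y2)     # overall rate 1/3 before any puncturing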
MERITS

Of the many practical error correction methods known so far, turbo codes and low-density parity-check codes come closest to the Shannon limit, the theoretical limit of the maximum rate of information transfer over a noisy channel.
Turbo codes increase the data rate without increasing the transmitted power, or, equivalently, they can be used to decrease the amount of power needed to transmit at a given data rate.
They generally produce the best performance results in terms of throughput and latency.

DEMERITS

Their main disadvantage is the relatively high decoding complexity.
They also have a relatively high latency, which makes them unsuitable for some applications.
They are also not very useful for satellite links, where the latency produced by the transmission distance and the limited speed of light is already large.
The decoding algorithms are complex in nature.

APPLICATIONS

3G and 4G mobile telephony standards use turbo codes widely, for example HSPA and LTE.
They are used in MediaFLO from Qualcomm.
Turbo codes are also used in NASA missions such as the Mars Reconnaissance Orbiter, as an alternative to RS-Viterbi concatenated codes.
IEEE 802.16 (WiMAX), a wireless metropolitan network standard, uses block turbo coding and convolutional turbo coding (CTC).
Turbo codes are used in the design of digital communication systems and also in detection and demodulation.
Turbo codes represent a significant development in the field of error correcting codes.
Cable transmission, short-distance communications and data storage are prospective applications of turbo codes.

(References: Berrou, Claude; Glavieux, Alain; Thitimajshima, Punya, Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes; Berrou, Claude, The Ten-Year-Old Turbo Codes Are Entering into Service, France.)
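The Shannon limit referred to above can be made concrete with a short calculation; the snippet below evaluates the standard capacity formulas for a binary symmetric channel and for an AWGN channel (the crossover probability and SNR values are arbitrary examples).

    import math

    def bsc_capacity(p):
        """Capacity of a binary symmetric channel with crossover probability p:
        C = 1 - H(p), where H is the binary entropy function (bits per channel use)."""
        if p in (0.0, 1.0):
            return 1.0
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return 1.0 - h

    def awgn_capacity(snr_db):
        """Shannon capacity of a complex AWGN channel, C = log2(1 + SNR),
        in bits per channel use."""
        snr = 10 ** (snr_db / 10)
        return math.log2(1 + snr)

    print("BSC, p = 0.1   :", round(bsc_capacity(0.1), 3), "bit/use")
    print("AWGN, 0 dB SNR :", round(awgn_capacity(0.0), 3), "bit/use")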
4. DETAILED COMPARISON

LINEAR BLOCK CODES AND CONVOLUTIONAL CODES

Convolutional codes have a regular, natural trellis structure, which is useful in the implementation of the Viterbi algorithm, whereas linear block codes have a natural but minimal trellis structure.
For both convolutional codes and linear block codes, the complexity of the Viterbi algorithm can be estimated by the number of trellis edge symbols per encoded bit.
Generally, convolutional codes do not provide more protection against noise than comparable linear block codes.
In many cases, convolutional codes offer greater simplicity of implementation than linear block codes.
Convolutional codes may not have a minimal trellis representation, whereas linear block codes do have a minimal trellis structure.
The main difference between linear block codes and convolutional codes is that a system using block codes transmits the k data bits and then transmits the n-k redundant bits, whereas a system using convolutional codes produces n coded bits from k data bits, and the codeword need not contain the k data bits unaltered.
The main difference between a convolutional encoder and a block encoder is that the convolutional encoder output depends not only on the current information block u_k but also on the previous m information blocks, that is

v_k = u_k G_0 + u_(k-1) G_1 + ... + u_(k-m) G_m.

A convolutional code without memory is merely a linear block code, so the theory of linear block codes can be seen as a degenerate case of the theory of convolutional codes; the usual approach with convolutional codes is to keep n and k small and fixed and to vary the memory order m.

TURBO CODES VS CONVOLUTIONAL CODES

There are some differences between the behaviour of turbo codes and convolutional codes. The performance of convolutional codes improves with increasing constraint length (code complexity). This is not true for turbo codes, where the best codes have small constraint lengths. With decreasing code rate, the performance of convolutional codes does not improve significantly, whereas turbo codes achieve a very significant coding gain even at low coding rates. From the implementation point of view, recursive encoders and soft-output decoders are essential in turbo code schemes, whereas they are not essential for convolutional codes except in concatenated configurations.

Criteria                   Convolutional codes   Turbo codes
Larger constraint length   good                  bad
Lower coding rate          indifferent           good
Larger free distance       good                  indifferent
Soft output decoders       indifferent           good
Recursive encoders         indifferent           good

Both linear block codes and convolutional codes are the most frequently used types of codes in engineering practice for the design of useful codes.

TURBO VS LINEAR BLOCK CODES

Turbo codes are a hybrid of linear block codes and convolutional codes.
Linear block codes used in a turbo code instead of convolutional codes form a turbo product code (TPC): two Hamming codes are concatenated serially without an interleaver between them. TPCs perform well at low SNR and can be formed from any block code.
Convolutional codes used in a turbo code usually form a parallel concatenated convolutional code (PCCC) with small constraint length; such codes are systematic.

5. SIMULATION RESULTS

The simulation results for linear block codes are as follows. The dual adaptive importance sampling (DAIS) technique evaluates the word and bit error rates of linear block codes down to extremely low values. (Reference: S. Lin and D. Costello, Error Control Coding: Fundamentals and Applications.)

The simulation results for convolutional codes are as follows. Simulations with two identical recursive convolutional codes and an interleaver of length 400 with k = 50 provide manufacturer interoperability, because this technique achieves improvements of over 3.5 dB relative to trellis coded modulation. (Reference: Concatenated trellis/Reed-Solomon coding in DMT systems, Kschischang.)

The simulation results for turbo codes are as follows. The results are obtained at a rate of R = 1/2; the length of the data block is assumed to be 400 bits, and a MAP decoder is used in the simulation. From the results it is observed that the first few iterations yield the most significant improvements in BER for any given signal-to-noise ratio; after that, the results appear to converge to a fixed BER for each value. (Reference: Andersen J.D. and Zyablov V.V., Interleaver Design for Turbo Coding.)
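As a simplified illustration of how such BER curves are produced (not a reproduction of the DAIS or MAP simulations cited above), the following Monte Carlo sketch estimates the bit error rate of uncoded BPSK over an AWGN channel at a few Eb/N0 values; the block length and the choice of uncoded BPSK are assumptions made for brevity.

    import numpy as np

    rng = np.random.default_rng(0)

    def ber_uncoded_bpsk(ebn0_db, n_bits=200_000):
        """Monte Carlo BER estimate for uncoded BPSK over an AWGN channel."""
        ebn0 = 10 ** (ebn0_db / 10)
        bits = rng.integers(0, 2, n_bits)
        symbols = 2 * bits - 1                       # map {0,1} -> {-1,+1}
        noise_std = np.sqrt(1 / (2 * ebn0))          # Es = Eb for uncoded BPSK
        received = symbols + noise_std * rng.normal(size=n_bits)
        decisions = (received > 0).astype(int)       # hard decision
        return np.mean(decisions != bits)

    for ebn0_db in (0, 2, 4, 6):
        print(f"Eb/N0 = {ebn0_db} dB  ->  BER ~ {ber_uncoded_bpsk(ebn0_db):.4f}")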
6. CONCLUSION

In this research paper we have discussed the principles, applications, merits and demerits of linear block codes, convolutional codes and turbo codes respectively. We also discussed the main differences between block codes, convolutional codes and turbo codes and presented their respective simulation results. Personally, by doing this research I learned a lot about coding theory and came to appreciate the importance of these codes in the present world. A future application of these codes is to provide substantial improvements in satellite communication.

7. REFERENCES

(1) Code tables: bounds on the parameters of various types of codes.
(2) q-ary code generator program.
(3) Error correcting codes (ECC) page.
(4) Wikipedia articles incorporating text from Federal Standard 1037C on convolutional codes.
(5) Turbo decoding as an instance of Pearl's belief propagation algorithm.
(6) IEEE Journal on Selected Areas in Communications - R. J. McEliece and D. J. C. MacKay.
(7) Turbo equalization: principles and new results, IEEE Transactions on Communications.