Neutral Citation Number: [2021] EWHC 1639 (Pat)
Case No: HP-2019-000014
IN THE HIGH COURT OF JUSTICE
BUSINESS AND PROPERTY COURTS OF ENGLAND AND WALES
INTELLECTUAL PROPERTY
PATENTS COURT
Rolls Building
Fetter Lane
London, EC4A 1NL
16th June 2021
Before :
THE HONOURABLE MR JUSTICE MELLOR
- - - - - - - - - - - - - - - - - - - - -
Between :
(1) MITSUBISHI ELECTRIC CORPORATION
(2) SISVEL INTERNATIONAL SA
Claimants
- and -
(4) ONEPLUS TECHNOLOGY (SHENZHEN) CO., LTD
(5) OPLUS MOBILETECH UK LIMITED
(6) REFLECTION INVESTMENT B.V.
(7) GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP, LTD
(8) OPPO MOBILE UK LTD
(9) XIAOMI COMMUNICATIONS CO LTD
(10) XIAOMI INC
(11) XIAOMI TECHNOLOGY FRANCE SAS
(12) XIAOMI TECHNOLOGY UK LIMITED
Defendants
- - - - - - - - - - - - - - - - - - - - -
Michael Tappin QC and Michael Conway (instructed by Bird & Bird LLP) for the Claimants
James Abrahams QC and Adam Gamsa (instructed by Taylor Wessing LLP) for the Fourth to Eighth Defendants and (instructed by Kirkland & Ellis International LLP) for the Ninth to Twelfth Defendants
Hearing dates: 19th, 22nd-26th, 29th-30th March 2021
- - - - - - - - - - - - - - - - - - - - -
Approved Judgment
I direct that pursuant to CPR PD 39A para 6.1 no official shorthand note shall be taken of this Judgment and that copies of this version as handed down may be treated as authentic.
COVID-19: This judgment was handed down remotely by circulation to the parties’ representatives by email. It will also be released for publication on BAILII and other websites. The date and time for hand-down is deemed to be 10am on Wednesday 16th June 2021.
.............................
THE HON MR JUSTICE MELLOR
Mr Justice Mellor:
1. This Judgment is structured as follows:
INTRODUCTION 2
The Expert Witnesses 7
The Skilled Person 12
Common General Knowledge 16
THE PATENTS 110
Construction/Scope of the Claims 143
'pilot symbol' 147
Analysis of the Defendants' arguments 156
'information indicating' 161
'transferring information indicating the [not] need' 162
'information based on pilot symbols provided by the base station' 167
ESSENTIALITY 171
Introduction and identification of the issues 171
Relevant Technical Background 196
Part 1 - Basic Concepts in LTE 196
Part 2 - Access to UL-SCH Resources 212
Part 3 - Signalling SR via PUCCH 216
Reference signals in LTE 234
Structure of a PUCCH format 1x signal 236
Code Domain Multiplexing of PUCCH formats 244
Configuration of sr-PUCCH 254
Part 4 - Allocation of UL-SCH resources to UEs 260
Interim conclusion on the SR procedure 269
Part 5 - How PUCCH format 1x signals are constructed and transmitted 270
Part 6 - The role of coherent modulation 275
Part 7 - The passages in TS 36.211 relied upon by the Defendants 279
The issues which remain to be resolved 289
First: the wrong sort of pilot 290
Second: no reference signals in the PUCCH format 1x signals 291
Analysis 293
PUCCH format 1 signal for signalling SR 301
PUCCH format 1a/1b signals for signalling SR 303
Third: the DMRS is sent when no resources are required 321
Fourth: the DMRS is not sent at different powers 322
Fifth: what if the DMRS was sent alone, and no SR was sent? 323
Eighth: DMRS are provided by the base station but the d(0) symbols are not 324
Ninth: non-coherent modulation or 'on-off keying' 328
VALIDITY 334
KWON 334
What Kwon discloses 334
The issues raised on Kwon 344
The step (1) message 345
The step (1-2) pilots 346
The step (1) message sent on a 'dedicated request channel' 349
ADDED MATTER 360
(1) transferring information indicating no need of a wireless resource 362
(2) information 'based on' pilot symbols 364
Conclusion 367
2. This case is about the use of pilot or reference signals which are sent in telecommunications networks for various purposes. The two patents in issue, EP 2,254,259 (‘EP259’ or ‘259’) and EP 1,903,689 (‘EP689’ or ‘689’), claim a new use for pilot signals. 689 is the parent of 259 and they share the same title “Method and device for transferring signals representative of a pilot symbol pattern” and priority date of 22 September 2006. Their descriptions are essentially the same, but their claims differ. It is common ground that I need only consider the essentiality and validity of claim 1 of each patent. Both patents are concerned with the use of pilot signals to request uplink resources in a telecommunications network, e.g. a mobile phone network. EP259 claim 1 relates to the use of pilot signals to request uplink resources, whereas claim 1 of EP689 relates to the use of information based on pilot signals to indicate the need (or not) for uplink resources.
3. This technical trial followed immediately after the trial concerning EP(UK) 1,925,142 (‘EP142’), on which I gave judgment on 26th April 2021 as [2021] EWHC 1048 (Pat). The Claimants own or administer the licensing of certain patents, including 259 and 689 (which are owned by the First Claimant), which form part of a group of patents called the MCP Pool. In these proceedings, the Claimants seek to make the Defendants, as manufacturers of 4G handsets, take a licence to the patents in the MCP Pool. A trial to consider the relief claimed by the Claimants, which will include a determination of the FRAND terms for licensing the MCP Pool, is set to be heard in October 2021. The Defendants which remain fall into two groups, the OnePlus Defendants (4-8) and the Xiaomi Defendants (9-12). Fortunately, the Defendants have joined forces for these technical trials. At this trial, the Claimants’ case was argued by Michael Tappin QC and Michael Conway and the Defendants’ case by James Abrahams QC and Adam Gamsa. I am grateful to Counsel and their supporting teams for their assistance. As with the trial of EP142, this trial took place as a fully remote trial on MS Teams, using CaseLines for the electronic bundles. Generally, the technology worked well, although we experienced some slight interference at times when Mr Bishop (the expert called by the Defendants) was giving his evidence.
4. At a high level, the issues for my determination following the trial are:
i) Are the patents essential to the aspect of the LTE (4G) standard concerned with transmission of a scheduling request (SR) to request uplink resources?
ii) Are the patents invalid:
a) because they are anticipated by or obvious over the prior art known as Kwon?
b) in the case of the 689 patent, for added matter?
5. As usual, there is a good amount of technical detail which I need to explain in order to address these issues, and many acronyms to absorb. The technical background falls into two areas. First, there is the common general knowledge (‘CGK’) as at 22 September 2006. As I directed at the PTR, the parties produced a very useful document in which they reached agreement as to the CGK, save for one relatively small point. Second, there is the technical detail required to understand the arguments on essentiality. In this case, this is far more complex than the subject-matter of the CGK, largely because it concerns the details of signalling in LTE. Although ultimately there was little dispute as to this, in future, where the infringement/essentiality/equivalents case involves complex technology, consideration should be given at the PTR stage to a direction that the parties serve a document or documents setting out not only (i) Agreed CGK, with a list of CGK issues in dispute, but also (ii) Agreed technical background for the infringement/essentiality/equivalents case, again with a list of technical issues in dispute. In each case the issues in dispute should be cross-referenced to the relevant paragraphs in the experts’ reports. Documents of this type will be of great assistance to any trial judge.
6. The patents were written at a time when 3G technology was mature and the mobile phone industry had taken the initial steps towards developing new 4G standards.
7. Mr Nicholas Anderson was the expert witness called by the Claimants, with Mr Craig Bishop called by the Defendants.
8. Mr Anderson was subjected to cross-examination which was, at times, unnecessarily aggressive. At one point, counsel even responded to one of his answers by saying ‘I cannot believe that really is your evidence’. It is important for counsel to understand that what he or she thinks or believes is irrelevant. In their closing argument, the Defendants made some trenchant criticisms of Mr Anderson which culminated in the submission that ‘he saw the witness box as a platform to fight his corner.’
9. I reject all of the Defendants’ criticisms of Mr Anderson. In my view, Mr Anderson was precise and accurate, both in his written evidence and in cross-examination. I point out this is not a criticism at all, bearing in mind the technology involved and the issues in this case. He was careful not to overstate matters. He explained the technology very well indeed.
10. I also point out that ultimately I was unable to detect any serious disagreement between the experts on technical matters. This serves to emphasise the inappropriateness of the Defendants’ attack on Mr Anderson, because the Defendants’ own expert ultimately agreed with Mr Anderson.
11. Mr Bishop’s written evidence was more guarded than that of Mr Anderson. Both his written and oral evidence suffered somewhat from his adoption of the Defendants’ key arguments on essentiality which were founded on taking some statements in the LTE standard too literally and somewhat out of context. Whilst he attempted in cross-examination to stick to the party line, as I relate later (see paragraph 315 below), this ended up with him tying himself in knots. Ultimately, however, he very largely agreed with the points put to him and where he qualified his answers, his qualifications did not impact on the matters I have to decide. No doubt for those reasons the Claimants did not seek to criticise Mr Bishop. Overall, I am grateful to both experts for their assistance.
12. For convenience I will refer to the Skilled Person, whilst recognising that in reality this notional person might well have comprised a team of engineers. The experts were largely agreed as to the characteristics of the skilled person. He or she would be a systems engineer or architect working on radio access layers in the wireless communications industry, with an interest in improving the capabilities or efficiency of the radio access network. They would be likely to have a particular interest in methods for improving the allocation of resources in the system. They would require a detailed knowledge of the physical (‘PHY’) and medium access control (‘MAC’) layers and their components.
13. They would have an undergraduate degree or similar qualification in electronic engineering, maths, physics or computer science along with practical experience in the design, simulation or implementation of the MAC and PHY layers of wireless communications products.
14. Where the experts differed was as to the Skilled Person/Team’s knowledge of the detail of discussions at the meetings of the 3GPP RAN Working Groups 1 and 2. RAN WG1 was responsible for the specification of the physical layer of the radio interface for UE, UTRAN and E-UTRAN (see below). RAN WG2 was responsible for the radio interface architecture and protocols (MAC, RLC, PDCP), the specification of the RRC protocol, the strategies of Radio Resource Management and the services provided by the physical layer to the upper layers. Mr Bishop acknowledged that not all members of his team would attend all RAN WG1&2 meetings but maintained they would have an in-depth understanding of their specialist area and would have ready access to and knowledge of 3GPP deliverables as well as the meeting reports/contributions from colleagues who did attend.
15. For his part, Mr Anderson did not expect the skilled person to be familiar with the minutes of meetings of the working groups or of technical documents submitted to such meetings which had not been incorporated into approved specifications. He noted, correctly, that there is nothing in the patents to indicate they are focussed on 3GPP systems. He indicated that those with a practical interest in the patents would not have been limited to those engineers who regularly or even occasionally attended the meetings of the 3GPP RAN WG1&2. Although this mini-dispute did not appear to have any impact on anything I have to decide, I incline to Mr Anderson’s view because I do not consider the ordinary unimaginative skilled person needed to keep up to date with all the discussions being conducted in the relevant groups at a particular standards setting organisation (even one as influential as 3GPP) to carry out his or her job. They would be aware of the effort to develop a long term evolution of UMTS and would be aware that 3GPP was undertaking an exercise to study and evaluate candidate techniques. They could wait to find out what that organisation had adopted in its approved specifications, although, given a specific need to do so, they were well able to find meeting reports and contributions from the relevant working groups.
16. What follows in this section is very largely based on the agreed CGK document, with some edits of my own. Everything in this section formed part of the Common General Knowledge.
17. Mobile networks are based on cellular networks. In a cellular network, mobile devices (often referred to as User Equipment (“UE”)) communicate with the network via a fixed transceiver (often referred to as a base station (“BS”)) that operates over a particular area called a cell.
18. Cellular networks usually comprise a Core Network (“CN”) component, and a Radio Access Network (“RAN”) component. The CN provides network and connection management functions in addition to onward connectivity to other networks, such as the internet. The RAN comprises a number of base stations which provide connectivity between UEs and the Core Network.
19. Wireless communication may be circuit switched (“CS”) or packet switched (“PS”). This case only concerns PS. In PS systems, data is sent in units referred to as ‘packets’, which contain two parts: (i) a header, which primarily identifies the addresses of the sending entity and of the intended recipient entity; and (ii) the payload. The packets are then sent on a ‘hop-by-hop’ basis from one entity to the next based on this header information until they reach their destination.
20. Within a cell, UEs and the network communicate on radio “resources”, which may be defined in a number of different ways, including in one or more of time, frequency and code.
21. The Open Systems Interconnection (“OSI”) model is a common way of describing different conceptual parts of communication networks developed by the International Organisation for Standardisation (“ISO”). It describes seven layers that are typically present in a communications network in some form, of which only layers 1 - 3 have any bearing on this case.
22. A group of layers that cooperate to achieve an overall communication of information is referred to as a protocol “stack”. When transmitting information, each layer receives data packets from the layer above, processes them (which may involve for example adding new or different header information), and then passes them to the layer below. Data packets received from a higher layer are referred to as ‘service data units’ (“SDUs”) whilst data packets passed to the lower layer, following processing, are referred to as ‘protocol data units’ (“PDUs”).
23. Layer 1 is the Physical Layer (“PHY”). It provides the means for physically transferring bits over the air interface. It is responsible for encoding and modulating data in readiness for its transmission over the air interface (see below).
24. Layer 2 is referred to as the Data Link layer. It ensures there is a reliable flow of information between the UE and the Core Network and may comprise several components, including the Medium Access Control (“MAC”) and Radio Link Control (“RLC”). Functions of the RLC may include buffering of data awaiting transmission for separate data flows referred to as ‘logical channels’. The MAC’s functions may include multiplexing logical channels provided by the RLC onto transport channels at the PHY. In doing so, the MAC layer may also be responsible for managing the relative prioritization of the logical channels as part of the multiplexing process. The MAC may also be responsible for scheduling. It may also provide fast retransmissions via a process known as Hybrid Automatic Repeat Request (“HARQ”) and may assign or determine a suitable transport format to use for an upcoming transmission.
25. Layer 3 is the network layer. In addition to routing data in the Radio Access Network, it manages the setting up, modification and release of radio connections between the UE and the Core Network and configures the protocols to be used at MAC and RLC level in respect of different services.
26. In a wireless communication system, information in a data signal cannot easily be transmitted and received in its native form. Therefore, it is converted into a waveform that is, in turn, used to multiply (or ‘modulate’) a ‘carrier’ wave (at the desired carrier frequency). This has the effect of modifying the amplitude, frequency and/or phase of the carrier wave depending on the type of modulation that is used. These modifications are detected at the receiver in order to recover the information of the data signal.
27. A variety of modulation approaches are possible. A popular approach is phase modulation.
28. In phase modulation, the phase (i.e. the point within a 360 degree sinusoidal cycle) of the carrier wave is modified by the information that is to be carried. The modification applies within a time duration called a ‘symbol period’. In a simple example, the phase modification may be 0 degrees throughout the symbol period to represent a binary “1”, and may be 180 degrees throughout the symbol period to represent a binary “0”. The waveform that modifies (or modulates) the carrier wave is then a square wave assuming values of +1 throughout a symbol period (to effect a 0 degree, or no, modification) and -1 (to effect a 180 degree modification). An example of phase modulation in which a series of 3 bits (represented by the sequence “1, 0, 1”) are to be conveyed is shown in Figure 1 below.
Figure 1 - Example of phase modulation of a carrier wave
29. Phase modulation forms the basis of commonly used modulation schemes such as Binary Phase Shift Keying (“BPSK”) and Quadrature Phase Shift Keying (“QPSK”). In BPSK one bit is encoded per symbol by modulating the carrier wave through 180 degrees. In QPSK, two bits are encoded per symbol by modulating the carrier wave through multiples of 90 degrees, as shown below.
Figure 2 - Example of BPSK
Figure 3 - Example of QPSK
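By way of illustration only (this short Python sketch is mine and does not form part of the agreed CGK document; the particular bit-to-phase assignments are arbitrary examples rather than those of any standard), the mapping of bits to BPSK and QPSK modulation symbols can be expressed as follows:

```python
import numpy as np

# BPSK: one bit per symbol; a "1" leaves the carrier phase unchanged (+1),
# a "0" shifts it by 180 degrees (-1).
def bpsk_modulate(bits):
    return np.array([1.0 if b == 1 else -1.0 for b in bits])

# QPSK: two bits per symbol; each bit pair selects one of four phases spaced
# 90 degrees apart (the pairing of bit pairs to phases here is illustrative).
def qpsk_modulate(bits):
    phases = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}
    return np.array([np.exp(1j * np.deg2rad(phases[(bits[i], bits[i + 1])]))
                     for i in range(0, len(bits), 2)])

print(bpsk_modulate([1, 0, 1]))      # the "1, 0, 1" example of Figure 1
print(qpsk_modulate([1, 0, 1, 1]))   # two complex symbols, two bits each
```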
30. The transmitted waveforms are usually contained within a particular range of frequencies. The width of the frequency range is known as the ‘bandwidth’ of the signal, and the centre of the range is known as the ‘carrier frequency’. The use of different carrier frequencies for different signals allows these to be communicated at the same time without causing substantial interference to one another.
31. Radio propagation in a land mobile channel is characterised by reflections, diffractions and attenuation of energy caused by obstacles in the environment such as hills or buildings. This results in the signal travelling from transmitter to receiver by different paths which may have different lengths and/or may affect the strength of the signal. This is referred to as multipath propagation.
32. As a result of multipath propagation, the receiver may receive multiple copies or ‘echoes’ of a signal which may have a different delay or amplitude. These copies may interfere with one another. Constructive interference is when the peaks of two signals coincide, amplifying the signal at that point. Destructive interference is when the trough of one signal coincides with the peak of another, cancelling it out. Over time this interference can cause large variations in amplitude known as ‘fading’.
33. In some cases, a signal travelling by a longer path is delayed such that the receiver starts to receive one symbol from a signal on a shorter path while still receiving the previous symbol on the longer path. The two symbols therefore overlap at the receiver, causing a problem known as inter-symbol interference (“ISI”).
34. There is also the possibility of phase shift, for example when a transmitting mobile is moving. If the mobile moves the distance of half a wavelength of the carrier signal, then the phase of that signal changes by 180°. Given an operating frequency of 2.5GHz, half a wavelength is only 6cm. This phase change needs to be taken into account during demodulation if phase is used to encode information in the transmission.
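The 6 cm figure follows from the standard relationship between carrier frequency and wavelength:

$$\lambda = \frac{c}{f} = \frac{3 \times 10^{8}\ \mathrm{m/s}}{2.5 \times 10^{9}\ \mathrm{Hz}} = 0.12\ \mathrm{m}, \qquad \frac{\lambda}{2} = 6\ \mathrm{cm}$$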
35. The overall effect of the environment on a signal is often referred to as a ‘propagation channel’. The propagation channel may be characterised by its ‘impulse response’. This is often denoted as a function h(t) of time ‘t’. The effect that such a time domain impulse response might have on individual frequencies ‘f’ is referred to as the channel frequency response H(f). H(f) is entirely defined by the corresponding time domain impulse response h(t), and therefore these are just two different ways to represent the same thing, i.e. the response of the propagation channel which characterises the relationship between the transmitted signal and the received signal. This channel response is determined simply by the set of transmission paths that exist within the environment between the transmitter and receiver. An example of the impulse response of a propagation channel as a function of time and frequency is shown below.
Figure 4 - Impulse response of a propagation channel
36. The impulse response of the propagation channel is determined by the position of the transmitter and receiver within the environment and may also change as other objects within the environment move. At carrier frequencies that are typical of many practical wireless systems (including for mobile phones), the impulse response may change appreciably within a few milliseconds.
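By way of illustration only (a minimal Python sketch of my own, with arbitrary tap values, not taken from the agreed CGK document), the relationship between a time domain impulse response h(t) and the corresponding frequency response H(f) can be shown numerically:

```python
import numpy as np

# A simple two-path impulse response: a direct path plus a delayed, attenuated
# echo. The tap values and the delay of five samples are arbitrary.
h = np.zeros(64, dtype=complex)
h[0] = 1.0       # direct path
h[5] = 0.5       # multipath echo

# The channel frequency response H(f) is the Fourier transform of h(t); the
# echo causes |H(f)| to vary with frequency (frequency-selective fading).
H = np.fft.fft(h)
print(np.round(np.abs(H[:8]), 2))   # magnitudes dip where the echo adds destructively
```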
37. The received signal may also be affected by other (potentially interfering) signals originating from other transmitters, in addition to so-called thermal noise.
38. One measure of a received signal’s quality is known as the Signal to Noise-plus-Interference Ratio (“SNIR”, also designated as SINR on occasion), which is a ratio of the power of the wanted signal to the power of interfering signals and noise.
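Expressed as a formula (my own summary of the ratio just described, with P denoting received power), and noting that the ratio is commonly quoted in decibels:

$$\mathrm{SNIR} = \frac{P_{\mathrm{signal}}}{P_{\mathrm{interference}} + P_{\mathrm{noise}}}, \qquad \mathrm{SNIR_{dB}} = 10\log_{10}\!\left(\mathrm{SNIR}\right)$$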
39. Unless compensated for, the effects of the propagation channel (along with any interference and noise present in the received signal), degrade the ability of the receiver to correctly determine the information within the transmitted signal.
40. In wireless systems, these effects may be mitigated by employing certain techniques at the transmitter, at the receiver, or (more commonly) a combination of both.
41. The function of the transmitter is to convert data for transmission into signals or waveforms that are then radiated by the antenna. Typical processes carried out on the data at the transmitter include Forward Error Correction (“FEC”); modulation; and up-conversion and amplification.
42. Forward Error Correction involves encoding the message to expand the number of bits that are transmitted, such that an original data message of length ‘n’ bits is represented by a longer series of ‘n+k’ bits. This enhances robustness as even if the receiver does not correctly receive all the bits transmitted, it is able to use information obtained from the other bits to decode the original message. The ratio of ‘n’ data bits to ‘n+k’ code bits is called the ‘code rate’.
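For example (the figures are chosen purely for illustration), a message of n = 100 data bits encoded into n + k = 300 transmitted bits has a code rate of:

$$R = \frac{n}{n+k} = \frac{100}{300} = \frac{1}{3}$$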
43. Modulation is described above. A variety of modulation schemes may be used which result in a different number of bits (‘m’) being encoded per modulation symbol. BPSK and QPSK (see above) are two examples where m=1 and m=2 respectively. Another is 16 Quadrature Amplitude Modulation (“16QAM”), in which both phase and amplitude are modulated to encode a higher number of bits (m=4).
44. Up-conversion and amplification refers to the process following modulation, where the waveform is ‘upconverted’ (or translated in frequency) such that it occupies the desired carrier frequency and is amplified to the required power level prior to transmission via the antenna.
45. In general, modulation schemes with a lower value of m give greater robustness against adverse SNIR, while higher values of m result in higher data rates. Lower FEC code rates also give more robustness and tend to be used where the SNIR is low. Systems may use Adaptive Modulation and Coding (“AMC”), which involves adjusting the modulation scheme and FEC code rate according to the current SNIR. The transmitted power of the signal may also be adjusted to attempt to maintain a constant SNIR at the receiver.
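By way of illustration only (a minimal Python sketch of my own; the SNIR thresholds, modulation schemes and code rates are invented and are not those of any standard), Adaptive Modulation and Coding can be thought of as a simple look-up driven by the current SNIR:

```python
# Illustrative AMC selection: a robust scheme at low SNIR, a higher-rate scheme
# at high SNIR. Thresholds and values are invented for illustration.
def select_mcs(snir_db: float) -> tuple[str, float]:
    """Return (modulation scheme, FEC code rate) for a given SNIR in dB."""
    if snir_db < 2.0:
        return ("BPSK", 1 / 3)      # m=1, low code rate: maximum robustness
    if snir_db < 8.0:
        return ("QPSK", 1 / 2)      # m=2
    return ("16QAM", 3 / 4)         # m=4, high code rate: maximum data rate

print(select_mcs(0.5))    # ('BPSK', 0.333...)
print(select_mcs(10.0))   # ('16QAM', 0.75)
```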
46. Typical processing functions carried out at the receiver include: down-conversion and filtering; channel estimation and equalisation; demodulation; and FEC decoding.
47. Down-conversion is the reverse process of up-conversion, referred to above. Filtering may be performed to remove (or attenuate) signals with frequencies outside the carrier frequency range, which could otherwise interfere with the desired signal.
48. Equalisation is a process by which the receiver may reverse or ‘undo’ the effects of the propagation channel on the transmitted signal, such as ISI, and restore the correct phase and amplitude for each received symbol. To do so, the receiver must obtain an estimate of the propagation channel impulse response (h(t)) or its frequency domain equivalent (H(f)). This is called channel estimation. A common method of performing channel estimation is to correlate the received signal with a known reference signal (see below).
49. Once the correct symbols have been obtained following equalisation, demodulation refers to the process of determining the underlying sequence of bits that are carried by each modulation symbol.
50. FEC decoding is the process of decoding the receiver’s estimate of the FEC-encoded bit sequence to determine the underlying data.
52. As explained above, a known reference signal may be used to perform channel estimation. Because the properties of the reference signal are known to the receiver, it has knowledge of the form of the signal as sent and as received, which can be used to determine the channel conditions.
Figure 5 - pilot assisted channel estimation
54. Once derived, the channel estimate may then be used by the equaliser, which attempts to restore the modulation symbols so they are no longer distorted. Figure 6 below illustrates this basic principle for an example in which QPSK modulation is used.
Figure 6 - Restoration of modulation symbols by an equaliser
55. The use of a reference signal by a receiver to compensate for the phase shift introduced to the data symbols in this way is also referred to as “coherent demodulation”.
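By way of illustration only (a minimal Python sketch of my own, with invented channel and noise values, not taken from the agreed CGK document), pilot-assisted channel estimation followed by equalisation and coherent demodulation can be shown as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# A known pilot (reference) symbol followed by four QPSK data symbols.
pilot = 1 + 0j
data = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

# The propagation channel scales and rotates every symbol; a little noise is added.
channel = 0.8 * np.exp(1j * np.pi / 3)
noise = 0.01 * (rng.standard_normal(5) + 1j * rng.standard_normal(5))
received = channel * np.concatenate(([pilot], data)) + noise

# Channel estimation: compare the received pilot with the known transmitted pilot.
h_est = received[0] / pilot

# Equalisation / coherent demodulation: divide the received data symbols by the
# channel estimate to restore their original phase and amplitude.
equalised = received[1:] / h_est
print(np.round(equalised, 2))   # close to the transmitted QPSK symbols
```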
56. Reference signals may be used to determine the overall channel quality, for example by measuring the received power of the reference signal, or calculating the SNIR of the channel.
57. This may be done by post-processing the output of the channel estimator to obtain the overall energy of the signal.
58. Channel quality estimation may be used as an input to various features, such as AMC, power control, and channel-dependent scheduling.
59. As noted above, AMC refers to adapting the modulation and coding scheme used according to the existing channel conditions. In systems that support AMC, the network scheduler may use knowledge of radio channel conditions (such as the achievable SNIR) in order to determine an appropriate modulation and coding scheme. This may be done as part of dynamic scheduling (see below).
60. In systems that use dynamic scheduling (see below), the measured channel quality may be used as an input to the scheduler when allocating resources between users. For example, the system may allocate resources according to the point in time or particular frequency at which a given UE is experiencing advantageous channel conditions (either as compared with other UEs or compared with the average conditions for the UE in question).
61. However, channel dependent scheduling is not always used by networks. There were various approaches to uplink resource allocation which did not involve channel dependent scheduling (for example, link adaptation based on AMC, or non-adaptive / fixed-rate transmission (with or without power control)).
62. When channel dependent scheduling is used, the base station does not necessarily schedule mobiles with the best uplink channel quality, because such a scheme could result in UEs experiencing poorer channel quality being prevented from sending data for prolonged periods. Commonly, channel dependent scheduling involved allocating resources to UEs which are (at an instant in time) experiencing a channel quality that is higher than the average for that UE. This is sometimes referred to as multi-user diversity.
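By way of illustration only (a minimal Python sketch of my own with invented figures), a scheduler exploiting multi-user diversity might compare each UE's instantaneous channel quality with its own long-term average:

```python
# Channel-dependent scheduling sketch: grant the next resource to the UE whose
# instantaneous channel quality is highest relative to its own average, rather
# than to the UE with the best absolute channel. Values are invented.
def select_ue(instantaneous: dict[str, float], average: dict[str, float]) -> str:
    return max(instantaneous, key=lambda ue: instantaneous[ue] / average[ue])

inst = {"UE1": 12.0, "UE2": 6.0}    # current channel quality (linear SNIR)
avg = {"UE1": 15.0, "UE2": 5.0}     # each UE's own long-term average
print(select_ue(inst, avg))         # 'UE2': above its own average, unlike UE1
```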
63. The system may use measured channel quality as an input in power control. For example, if a desired SNIR or Signal to Interference Ratio (“SIR”) is established at a base station for received uplink transmissions from mobiles, the SIR of known reference symbols can be measured and compared with the target SIR. This can then be used by the base station to adjust the transmit powers of mobiles.
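Again by way of illustration only (a Python sketch of my own; the target SIR, step size and measurements are invented), closed-loop power control of this kind can be reduced to a simple up/down command loop:

```python
# Closed-loop uplink power control sketch: the base station compares the SIR
# measured on known reference symbols with a target and commands the UE to
# raise or lower its transmit power by a fixed step.
def power_command(measured_sir_db: float, target_sir_db: float, step_db: float = 1.0) -> float:
    return +step_db if measured_sir_db < target_sir_db else -step_db

tx_power_dbm = 10.0
for measured_sir_db in [3.0, 4.5, 6.5, 7.5]:          # successive measurements
    tx_power_dbm += power_command(measured_sir_db, target_sir_db=6.0)
    print(round(tx_power_dbm, 1))   # power is raised below target, lowered above it
```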
66. An example of this, with which the skilled person would have been familiar, was the Scheduling Information message in UMTS/HSPA (see paragraph 100 below). That message would be sent on the uplink within a MAC-e PDU (either alone or multiplexed with other data) on the enhanced dedicated channel (E-DCH). The E-DCH transmission was also code multiplexed with a reference signal on the uplink Dedicated Physical Control CHannel (‘DPCCH’) to allow coherent demodulation of the Scheduling Information data sent on the E-DCH, which contained information on buffer occupancy and power headroom.
67. As is apparent from the preceding paragraphs, the experts agreed that reference signals were used for various purposes. A reference signal could not necessarily be used for all of those purposes. However, the skilled person was also aware that a given reference signal could be used for different purposes in different contexts. For example, a given reference signal could be used to support coherent demodulation in one context and for channel quality determination in another context.
68. Mr Bishop provided two examples of this, with which Mr Anderson agreed, (subject to certain clarifications on the details which do not matter for present purposes):
i) First, he explained that in GSM/GPRS, 'training sequence' bits were used both for synchronisation and for channel equalisation.
ii) Second, he explained that in UMTS/HSUPA, pilot bits on the uplink DPCCH were used for multiple purposes.
69. Duplexing refers to dividing the radio resource between the “uplink” (transmissions from UEs to the network), and “downlink” (transmissions from the network to UEs).
70. Typically, mobile networks use either Frequency Division Duplexing (“FDD”) (using one carrier frequency for the uplink and another for the downlink) or Time Division Duplexing (“TDD”) (using different time periods of the same carrier frequency for the uplink and the downlink).
71. Multiple access allows two or more mobile stations connected to the network to share its capacity (in either the uplink or downlink directions) for transmission and reception of data. There are a variety of multiple access methods which allow the radio resource to be divided between different users so that their signals can be multiplexed (combined) over a shared medium and distinguished from each other at the receiver. Aspects of these methods may also be used in combination.
72. Frequency Division Multiple Access (“FDMA”) divides the available carrier frequencies between different UEs so that each has its own designated uplink and downlink carrier frequency. This was typically used in first generation analogue systems.
73. Time Division Multiple Access (“TDMA”) divides a given carrier frequency into time slots or “sub-frames” and allocates one or more time slots to a given UE. This is the scheme used in GSM.
74. Code Division Multiple Access (“CDMA”) allows transmissions from different users to be sent at the same time on the same frequency, by applying a spreading code that spreads the signal across a wider bandwidth. A form of CDMA called Wideband CDMA (“WCDMA”) is used in UMTS (see below). In WCDMA, channelisation codes (or Orthogonal Variable Spreading Factor “OVSF” codes) are used to spread the signal. A scrambling code is then applied to the spread signal. The combination of the channelisation/OVSF codes and the applied scrambling code can be arranged to be unique to each user. Thus transmissions to and from different UEs can be distinguished from each other at the receiver by “de-spreading” using the particular combination of the channelisation code and scrambling code. However, transmissions in the uplink are not orthogonal.
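By way of illustration only (a minimal Python sketch of my own, using two length-4 orthogonal channelisation codes and omitting the scrambling step for simplicity), spreading and de-spreading in CDMA can be shown as follows:

```python
import numpy as np

# Two orthogonal length-4 channelisation (Walsh/OVSF) codes.
code_a = np.array([+1, +1, +1, +1])
code_b = np.array([+1, +1, -1, -1])

# Each user multiplies its data symbol by its own code; the transmissions are
# then sent at the same time on the same frequency and add over the air.
symbol_a, symbol_b = +1, -1
combined = symbol_a * code_a + symbol_b * code_b

# De-spreading: correlating the combined signal with each code recovers the
# corresponding user's symbol, because the codes are orthogonal.
print(np.dot(combined, code_a) / len(code_a))   # +1.0 (user A's symbol)
print(np.dot(combined, code_b) / len(code_b))   # -1.0 (user B's symbol)
```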
75. Orthogonal Frequency Division Multiple Access (“OFDMA”) is a form of FDMA where each UE is assigned (within a carrier) a number of subcarriers. Subcarriers are overlapping but orthogonal subdivisions of the available frequency band. During a unit of time referred to as an ‘OFDM symbol’, multiple modulation symbols (i.e. a block of modulation symbols) are transmitted in parallel, with each on a different sub-carrier.
76. Single Carrier FDMA (“SC-FDMA”) is closely related to OFDMA. Like OFDMA, each UE is given a unique allocation of orthogonal subcarriers on which to transmit a block of modulation symbols (during a unit of time referred to as an ‘SC-FDMA’ symbol). However, in SC-FDMA an additional step called a Discrete Fourier Transform (DFT) is applied. This spreads each modulation symbol across a group of subcarriers, so instead of each modulation symbol being transmitted on a single sub-carrier (i.e. a 1-to-1 mapping of modulation symbols to sub-carriers), each modulation symbol is conveyed by a group of sub-carriers, which may vary according to the allocation given to each UE.
77. OFDMA and SC-FDMA are used in the downlink and uplink, respectively, for LTE. The common general knowledge on LTE is addressed below.
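By way of illustration only (a minimal Python sketch of my own, using a block of four arbitrary QPSK symbols), the difference between the OFDMA and SC-FDMA mappings of modulation symbols to subcarriers can be shown as follows:

```python
import numpy as np

# A block of four QPSK modulation symbols (values chosen arbitrarily).
symbols = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

# OFDMA: a 1-to-1 mapping; each modulation symbol is carried on its own subcarrier.
ofdma_subcarriers = symbols

# SC-FDMA: a DFT first spreads the block, so that every subcarrier carries a
# combination of all four modulation symbols.
scfdma_subcarriers = np.fft.fft(symbols) / np.sqrt(len(symbols))

print(np.round(ofdma_subcarriers, 2))
print(np.round(scfdma_subcarriers, 2))
# In both cases the transmitter then applies an inverse FFT across the carrier
# to produce the time-domain OFDM / SC-FDMA symbol.
```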
78. In telecommunications the term “orthogonal” has come to refer to signals that can be separated and distinguished from one another (e.g. when received by a receiver) in such a way that they do not interfere with one another. Orthogonality between signals can be achieved by separation in the time domain, frequency domain, or code domain. As an example of code-domain orthogonality, at zero offset (where the signals are aligned), the dot-product of the sequences [+1 +1 +1 +1] and [+1 +1 -1 -1] is equal to (1×1) + (1×1) + (1×-1) + (1×-1) = 0.
79. In all of the multiple access systems referred to above, it is important that the signals from different users can be distinguished from one another. This may be achieved by allocating resources that are orthogonal, but the fact that signals can be distinguished from one another does not mean they are orthogonal. In telecommunications, signals that are orthogonal do not interfere with each other.
80. Multiple access schemes require a mechanism for sharing resources among different users. Sharing mechanisms may be centralised or decentralised. In a decentralised approach, users contend for radio resources using an access protocol. In a centralised scheme, access to resources is controlled by a central network entity - commonly the base station. This is referred to as scheduling.
81. Resources may be assigned to a UE on a long-term basis (often referred to as a dedicated channel) or short-term basis.
82. Since transmission resources are scarce, any scheduling scheme needs to be designed to maximise the efficiency with which uplink transmission resources are assigned to and used by mobile stations so as to maximise the capacity of the cell in terms of data throughput while taking into account the data rate demands of the users of the system.
83. Dynamic scheduling is the process of providing short-term allocations of resources to users, typically lasting one to a few milliseconds.
84. In an uplink dynamic scheduling scheme, the network allocates resource to users as and when they are needed, based on their current traffic (i.e. the presence and/or arrival of data in the UE’s transmission buffer). The base station can dynamically allocate uplink resources to each user through the use of fast downlink control signalling, that carries information such as the location and quantity of the resource assigned at a particular instant in time. This allocation is generally referred to as a ‘grant’.
85. In systems that also support AMC, the network may use the fast downlink control signalling not only to dynamically allocate shared uplink channel resources, but also to indicate the modulation and coding scheme that is to be applied for the upcoming data transmission on those resources. Fast downlink signalling may also be used to control the transmission power of UEs to limit interference within the cell.
86. As referred to above, dynamic scheduling schemes may employ channel dependent scheduling in determining when to grant resources and what resources to grant to different users.
87. To operate dynamic scheduling effectively, the scheduler must know whether each user has traffic pending. In the uplink, the data pending for each UE is stored in its transmission buffers. This requires there to be means for each UE to communicate its buffer status to the network.
88. Systems using dynamic scheduling may therefore employ a buffer reporting mechanism. The buffer report may contain information such as the total number of bytes awaiting transmission, or the number of bytes in each of several queues that may be associated with different transmission priorities.
89. Standardisation enables interoperability of telecommunications networks. There have been three principal digital wireless telecommunications standards, spanning 2nd, 3rd and 4th generation (2G, 3G and 4G) technologies. The first of these was originally developed by an industry-wide collaborative group known as ETSI (European Telecommunications Standards Institute). Another larger group called the Third Generation Partnership Project (3GPP), which includes ETSI, developed the 3rd and 4th generation technologies.
90. Standards are embodied in technical specifications (TS) which set out the functionality that is to be implemented by equipment in order to comply with the standard. There are many different TSs covering different aspects of functionality required by the standard. In 3GPP, TSs are produced by technical specification groups (TSGs) that each have responsibility for a different overall technical area. Within each TSG there are several smaller Working Groups that are each focused on different specific aspects of the technology for which the TSG is responsible. A standard may have several principal “releases” through which new features are introduced. Within each release, different versions of each TS may be produced, for example to correct errors or to improve the clarity of the specification. At the point where the overall functionality to be included into a particular release has been defined, the release is said to be functionally “frozen”: after that time no new features may be added (though corrections and clarifications may still be made by means of new TS versions within the same release).
GSM/GPRS
91. GSM is a second generation (2G) wireless telecommunications standard. It was initially developed to support circuit switched services. The General Packet Radio Services (GPRS) architecture was added later to support packet switched services.
92. Multiple access in GSM/GPRS is achieved through a combination of TDMA and FDMA.
93. Further details of GSM/GPRS were given by Mr Bishop, and were agreed to be CGK. The only point which it is necessary to bring out is the point I have already adverted to above, that in GSM and GPRS, known reference bits (called "training sequence" bits) were included in transmission bursts for synchronisation and channel equalisation purposes.
UMTS/WCDMA
94. The Universal Mobile Telecommunications System (UMTS) is a third generation, or 3G, wireless telecommunications standard developed by 3GPP. The first UMTS release was Release 99, frozen in March 2000. Subsequently Release 5 introduced High-Speed Downlink Packet Access (“HSDPA”) and Release 6 introduced High-Speed Uplink Packet Access (“HSUPA”) - together “HSPA”.
95. UMTS supports both FDD and TDD modes and both modes employ CDMA. The FDD mode (often referred to as WCDMA - see above), is deployed more widely than the TDD mode.
96. In WCDMA, data from each user is spread across a wider bandwidth by multiplying each modulation symbol by a spreading sequence called an Orthogonal Variable Spreading Factor (“OVSF”) code or “channelisation code”, as described above.
97. Because transmissions from different UEs on the uplink are not orthogonal and may interfere with each other, the interference must be managed. In order to do so, the Node B controls the power of the transmissions from each UE.
HSPA
98. HSPA introduced several features designed to enable higher data rates. HSDPA used higher order modulation schemes and dynamic scheduling, with adaptive modulation and coding schemes, to increase the theoretical downlink data rate available to mobiles to up to 14.4 Mbps for short periods. It also introduced a shorter 2ms time interval over which resources could be allocated, called a Transmission Time Interval (“TTI”). HSUPA introduced the Enhanced Uplink Dedicated Channel (“E-DCH”) and gave control over resource allocation to the Node B. This increased efficiency by allowing dynamic scheduling by the Node B.
99. The Node B dynamically schedules uplink transmissions by providing UEs with a grant that determines the maximum transmission power that the UE may use for scheduled transmissions, which may be adjusted on a TTI-by-TTI basis.
102. In UMTS a Common Pilot Channel (“CPICH”) was defined as a downlink reference. Known pilot bits were also included in the downlink and uplink Dedicated Physical Control Channels (“DPCCHs”).
103. Reference signals are used in UMTS/HSUPA for several purposes including coherent demodulation, SIR measurement and power control.
104. To ensure 3GPP radio access technology would remain competitive over a long period, it was necessary to consider a long-term evolution of the 3GPP system architecture, optimised for packet data, and potentially involving a new radio-access technology. Aspects that were identified to improve packet data performance included reduced latency, enabling more rapid access to necessary resources through flexible and traffic dependent scheduling, as well as the support of higher data rates.
105. The parties agree that the following happened as a matter of fact. The first high level study, which introduced the term “3G Long Term Evolution” was published in September 2003. A workshop took place at the end of 2004 which considered OFDM as a technology for the downlink. Subsequently, feasibility studies were begun and largely completed by June 2006. This resulted in the following technical reports (“TR”), each of which had been published and was available to the public to download at the priority date:
a. TR 25.813 v7.0.0 on E-UTRAN Protocol Architecture;
b. TR 25.814 v7.0.0 Physical Layer Aspects for Evolved UTRA;
c. TR 25.912 v7.0.0 Feasibility Study for Evolved UTRA and UTRAN.
106. As I touched on earlier, there remained a mini-dispute as to the extent to which the content of these TRs was CGK. The Claimants’ case is that it was CGK that a TR had been published which set out the core functionality proposed for the radio aspects of the proposed system (this was TR 25.912). The Claimants also accept that TR 25.912 (and the other two TRs) were available to be consulted for the current state of LTE development. The Claimants do not accept that this means that the contents of the three TRs were CGK.
107. The Defendants’ case is that it was CGK at the priority date that each of the three TRs existed and, to the extent that the details contained therein were not already known to the skilled person without consulting the document concerned, the documents themselves were available to be consulted for the current state of LTE development.
108. This dispute has no impact on any of the issues I have to decide, but I incline to the view that these TRs would have been consulted by the skilled person, given any reason to do so. In other words, it would have been obvious for the skilled person to obtain the information, given any need to do so. This does not make the contents of these TRs CGK - see KCI v Smith & Nephew [2010] EWHC 1487 (Pat), [2010] FSR 31, Arnold J at [112].
109. Although logically I should consider the disclosure of the prior art, Kwon, at this point, the Defendants’ arguments on Kwon are really only comprehensible once one has an understanding of the arguments on Essentiality. For that reason, I will consider Kwon after Essentiality.
110. The patents have a priority date of 22 September 2006, 689 being the parent and 259 a divisional application. The text of the two patents is largely and materially identical, the differences lying in the claims and in the corresponding consistory clauses. The parties were in agreement that it is best to look primarily at EP689 and I will do the same, noting the differences in EP259.
111. Although three issues of construction arise, by far the most important is concerned with the proper interpretation of the term ‘pilot symbol(s)’. The Defendants argue that in the context of these patents, the term ‘pilot symbol’ in the claims means a channel quality pilot, i.e. a pilot symbol which is used, at least, for channel quality determination. The argument depends on close analysis of certain early paragraphs in the specifications, so it is necessary to set them out in detail before I come to address construction.
112. As I mentioned, both patents are entitled ‘Method and device for transferring signals representative of a pilot symbol pattern’, wording which also features in [0001] in the context of a telecommunications system and the transfer of such signals to a telecommunications device.
113. Paragraph [0002] explains that in some telecommunications networks, the access by mobile terminals to the resources of the network is decided by the base station.
114. Paragraphs [0003] and [0004] describe a prior art uplink scheduling scheme, albeit one which neither expert recognised. In that scheme, when a mobile needs to transmit data, it requests the base station to allocate it a pilot symbol pattern. The base station allocates to each requesting mobile a pilot symbol pattern and a pilot allocation time (e.g. 20ms). Throughout that period, each mobile must periodically (e.g. every ms) transfer to the base station signals representative of the pilot symbol pattern it has been allocated. The base station determines the channel conditions which exist between itself and each mobile, using the signals received, and schedules mobiles for uplink transmission based on those channel conditions.
115. The experts were agreed that the reference to ‘signals representative of the pilot symbol pattern’ allocated to the mobile would be understood by the skilled person to refer to signals derived from the pilot symbol pattern allocated by the base station to that mobile and he or she would understand them to be pilot signals. Generally, the term ‘pilot symbol pattern’ is used throughout the specification but is not used in the claims or in the consistory clauses, which use the term ‘pilot symbol’(s) instead. The experts used the shorthand ‘pilot signals’ and I will do the same.
116. In [0005]-[0007], two issues with the prior art scheme are identified. First, resources may be allocated to the mobile at times when it has no data to transfer, so the resources of the system are used inefficiently. Secondly, the mobile wastes battery power ("electric power resources") by sending pilot signals at times when it has no data to transfer. The patents explain at [0008] that one possible solution is to shorten the pilot allocation time, but that leads to more messaging between the mobiles and the base station.
117. In [0009] there is a brief recognition of a prior art patent in which pilot signals are modulated by power control information. In [0010], it is noted that in the prior art, pilot symbol patterns ‘are transferred for the only purpose of channel conditions determination’, a phrase in which the Defendants are keen to stress the word ‘only’.
118. Then [0011] contains a statement of the aim of the invention. In this and the further paragraphs quoted from the Patent, I have underlined the expressions on which the Defendants place particular emphasis:
"The aim of the invention is therefore to propose methods and devices which allow an improvement of the above mentioned technique and which enable to use signals representative of a pilot symbol pattern for another purpose than channel conditions determination, in order to improve the use of the resources of the telecommunication system and to better use of the electric power resources."
119. Then, after two consistory clauses, EP689 continues at [0014]-[0018]:
"[0014] Thus, the signals representative of the pilot symbol pattern can be used for another purpose than channel conditions determination. As the transmission power of the signals representatives of the pilot symbol pattern is adjusted to information associated to data to be transferred to the first telecommunication device [i.e. the base station], the first telecommunication device is able to execute further process based on the signals representative of the pilot symbols patterns.
[0015] Furthermore, as the signals representative of pilot symbol patterns convey another information, the resources of the telecommunication network are used efficiently.
[0016] (absent from EP259) [According to a particular feature, the information associated to data to be transferred is the existence or not of data to be transferred and if no data to be transferred exist, the transmission power is adjusted to null value.]
[0017] Thus, the first telecommunication device is precisely informed if data has to be transferred or not by the second telecommunication device [i.e. the mobile].
[0018] Furthermore, if the second telecommunication device has no more data to transfer, it reduces its electric power resources consumption."
120. EP689 continues with further consistory clauses which I have considered but need not relate because (a) it is common ground that I need only consider claim 1 and (b) they shed no light on the key issue of the construction of ‘pilot symbol(s)’. Paragraph [0032] introduces the 6 figures and the remainder of the specification from [0033] to [0126] describes example embodiments by reference to those figures. The specification concludes in a familiar manner with the reminder in [0127] that ‘Naturally, many modifications can be made to the embodiments of the invention described above without departing from the scope of the present invention.’
121. The specification of EP259 is identical in [0001]-[0011]. Then, of course, the consistory clauses and the alleged advantages of other embodiments differ until the two patents resume identical text from the introduction to the figures, which occurs in [0025] in EP259, to the end of the specification, where [0120] in EP259 has the identical conclusion as [0127] in EP689.
122. Of the figures, figures 1-3 show the basic elements of the telecommunications system, base station and mobile terminal. Figure 4 shows a time-domain overview of uplink resources:
123. In Figure 4, a time period corresponding to a first Pilot Allocation Time Duration or PST (PST1) is shown to comprise “N” time frames (labelled FR1 to FRN). Reverting to references to EP689, paragraph [0068] explains that each time frame is composed of two time slots: a pilot time slot portion (labelled in the patent as PS) and a data packet time slot portion (labelled in the patent as PCK).
124. The pilot time slots are used by the mobile terminal to transfer the pilot signal which has been allocated to the mobile terminal by the base station. Thus, in a PS time slot, the base station may receive pilot signals from multiple mobile terminals.
125. As explained in paragraph [0072], the pilot symbols allocated to a mobile terminal to send in PST1 may be equal to or different from those allocated to the same mobile terminal to send in PST2.
126. Paragraph [0069] explains that, within the described invention, the mobile terminal may convey information to the base station regarding the status of pending uplink data by means of adjusting the power of the pilot signals sent within the PS time slots of a PST: “The transmission power of the signals representatives [sic] of the pilot symbol pattern is adjusted according to information associated to data to be transferred”.
127. Paragraph [0069] does not explain further the detailed nature of how the “adjustment” is derived from the information to be transferred, which the skilled person would understand could take a number of forms. However, paragraph [0070] describes one approach in which it is explained that “if the transmission power of the signals representatives of the pilot symbol pattern is set to null value for a time slot PSn with 1 ≤ n ≤ N, the transmission of the signals representatives of the pilot symbol pattern can be also understood as a non-transmission of the signals in the time slot PSn.”. In other words, one type of adjustment is to not transfer the pilot signal at all (or to set the power to null, which is the same thing). From the description of Figure 5, and also for example from paragraph [0115] in respect of Figure 6, the skilled person would understand that this provides a binary indication as to whether or not there is pending data at the mobile terminal and that this information is conveyed by the presence or absence of the pilot signal.
128. An alternative (or additional) approach is where, in the case that the pilot signal is present (i.e. when it does not have a power set to zero) its power is adjusted according to a range of power levels. This could then provide further information, for example, on the Quality of Service (QoS) requirements of the pending data. These two approaches are expanded on in the context of Figure 5.
129. Figure 5 in each Patent shows the algorithm operated in the mobile terminal but omits the Y/N indications at each decision point. With those added, Figure 5 looks like this:
130. In steps S500 - S501, the mobile terminal requests and receives a pilot symbol pattern from the base station and an allocation time for transferring the pilot signal. In subsequent steps S502 - S506 (as described at paragraphs [0079] - [0085] and [0095] - [0096]) the power of the pilot signal is adjusted by the mobile terminal to one of a high value (P1), a lower value (P2) or to zero (a null value) in steps S504, S505 and S506 respectively, according to the decision flows at steps S502 and S503. If there is a packet to transfer at S502, the pilot signal power is set to a power, P1 or P2, depending on whether the required QoS is high or low (see paragraphs [0082] - [0085]). On the other hand, if the device determines at step S502 that there is no packet to transfer, then the mobile terminal moves to step S506 and the power of the pilot signal is set to zero. This is therefore implementing the approach introduced in paragraph [0070] and which is further mentioned in paragraph [0115] (which I describe below), wherein the presence or absence of the pilot signal indicates to the base station whether or not the mobile terminal has data packets to transfer (as I have noted, the 689 patent equates (at paragraph [0070]) the transmission of a pilot signal at zero power with a non-transmission of the pilot signal). This achieves the benefits of avoiding the inefficiencies associated with mobile devices sending pilot signals over the entire allocation period even when they have no data to transfer.
131. Following the power adjustment at steps S504 and S505 (or decision to set a null value at S506), at S507 the pilot signals are transferred in a PS time slot (or not transferred as the case may be). Thereafter, the mobile terminal checks (at S508) as to whether a message has been received from the base station authorising it to transfer a packet (paragraph [0088]) and if so, the packet is transmitted on the uplink in step S509.
132. A check is then made to determine whether the PST duration has ended (S510). If the PST is ongoing, the algorithm returns to S502 and iterates to set the pilot signal power for a subsequent pilot transmission opportunity (or pilot time slot, PSn). As described in paragraphs [0095] - [0098], if all the data has been transferred in step S509, the system will progress to step S506 and set the power of the pilot signal to a null value (in other words, it will stop transmitting the pilot signal) to indicate to the base station that it no longer has data to transmit. Thus the mobile terminal will continue to indicate whether or not it has data to transfer for the duration of the pilot allocation time, and in response may receive (or not) authorisation from the base station to transfer data. Such authorisations will cease when all data has been transmitted, because the mobile terminal will cease to transmit the pilot signal.
133. Conversely, if the PST has ended, the mobile terminal optionally checks (at S511) whether another PST has been allocated and if so, continues in a similar fashion within the new PST. If no further PST has been allocated, the mobile terminal no longer transmits pilot signals on the PS time slots: it therefore proceeds to step S512, and if further uplink data is present, must instead first request another pilot symbol pattern by sending a message to the base station (S500). If, at this stage, no data is present, the mobile terminal ends the algorithm, which may then be initiated at a later point in time if further data arrives in the transmission queue (paragraph [0102]). As explained at paragraph [0103], the check at S511 may be left out, in which case once the PST is ended the mobile terminal moves straight to step S512.
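The Figure 5 flow can be summarised, purely by way of illustration, in the following short Python sketch of the steps described in paragraphs 130 to 133. It is an editorial aid only: the function and variable names (run_pst, P1, P2, grants and so on) are assumptions which do not appear in the Patents, the queue model is deliberately simplified, and the optional S511 check is omitted, as paragraph [0103] permits.

```python
# Illustrative sketch of the Figure 5 mobile-terminal algorithm (steps S500-S512).
# All names and the simple queue model are assumptions made for illustration only.

P1, P2, NULL = 1.0, 0.5, 0.0   # high power, lower power, null value (no transmission)

def run_pst(packet_queue, pst_slots, grants):
    """One pilot symbol transmission (PST) period: records the pilot power chosen
    for each PS time slot (S502-S507) and transmits packets when authorised (S508-S509)."""
    pilot_log = []
    for slot in range(pst_slots):                      # loop until S510 says the PST has ended
        if packet_queue:                               # S502: packet to transfer?
            qos_high = packet_queue[0]["qos_high"]     # S503: QoS of the pending packet
            power = P1 if qos_high else P2             # S504 / S505
        else:
            power = NULL                               # S506: stop sending the pilot
        pilot_log.append(power)                        # S507: transfer (or not) the pilot
        if slot in grants and packet_queue:            # S508: authorisation received?
            packet_queue.pop(0)                        # S509: transmit the packet
    # S510: PST ended; S511 (optional further PST) omitted per paragraph [0103];
    # S512: if data remains, the terminal must return to S500 and request a new pattern.
    return pilot_log, packet_queue

if __name__ == "__main__":
    queue = [{"qos_high": True}, {"qos_high": False}]
    log, remaining = run_pst(queue, pst_slots=5, grants={1, 3})
    print(log, remaining)   # [1.0, 1.0, 0.5, 0.5, 0.0] and an empty queue
```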
134. Figure 6 of each patent illustrates a corresponding algorithm for the base station, which is described at paragraphs [0104] - [0126].
135. At the start of the algorithm, the base station (through the processor 200) receives messages from mobile terminals that need to transfer data, requesting an allocation of a pilot symbol pattern (S600). The base station identifies each device that sent a request message (see paragraph [0106]), and pilot symbol patterns are then allocated to each mobile terminal that did so (S601). As described at paragraph [0109], the base station also activates the PST.
137. Within a PST, the base station receives (S602) pilot signals from the mobile terminals and analyses these at step S603. An example is provided in paragraph [0111] wherein the base station may use the pilot signals to determine channel conditions by means of measuring the received power for each of the pilot signals.
138. At step S604, the base station selects a mobile terminal “to which next uplink time frame is allocated”. By this the skilled person would understand the description to be referring to the allocation of resource in the next time frame for the mobile terminal to send data. This is said in paragraph [0112] to be done “using the channel conditions”, but paragraphs [0113] - [0114] provide some further detail by means of an example. In the example of the scheduling operation given in paragraph [0113], the mobile terminal with the highest received pilot signal power is selected.
139. Paragraph [0114] describes a further variant, wherein “if each second telecommunication device 20 sets the transmission power of the signals representative of a pilot symbol pattern according to the packet it has to transfer, the probability that the first telecommunication device 10 allocates the next time frame to a second telecommunication device 20 which has a packet which has an associated high quality of service is increased”. The skilled person would understand this to be referring to an alternative approach, as seen in Figure 5, wherein resources are preferentially allocated to those mobile terminals with data requiring a high quality of service (which again may be indicated by the mobile terminal increasing the power of the pilot signal transmission).
140. Paragraph [0115] then explains that: “as each second telecommunication device 20 sets the transmission power of the signals representative of a pilot symbol pattern to null value when no packets need to be transferred, the first telecommunication device 10 never allocates the next time frame to a second telecommunication device 20 which has no packet to transfer, optimizing then the resources of the wireless network 15”. This is the approach described in general terms in paragraph [0070] and again in the context of Figure 5, wherein the mobile terminal indicates whether it has any packets to transfer by the presence or absence of the pilot signal: if not, the mobile terminal indicates that to the base station by setting the power to a null value, which avoids the inefficiencies of unnecessary transmission of the pilot signal as I have described.
141. Paragraphs [0116] - [0125] then describe further ways in which the base station takes into account the presence or absence of a pilot signal (i.e. whether it had power set to a null value or not) and explains that this may be used to determine whether a given mobile terminal has data packets remaining to transfer at the end of the PST. In the scheme that is described, at step S605, the base station registers if a pilot signal has not been received, which occurs when a mobile terminal sets the transmission power to a null value. At step S606 the base station sets a binary value M(k) for each kth mobile terminal based on whether a pilot signal for that user was received or not. For those mobile terminals from which a pilot signal was received, the base station sets a value of M(k)=1 whilst for those mobile terminals for which a pilot signal was not received, the base station sets M(k) to a null value - i.e. M(k)=0. Paragraph [0120] explains that M(k) therefore indicates those devices that have no more data to send at the end of the PST (though as explained at paragraph [0115] receipt of a null value at an earlier point in time within the PST is used to ensure that the base station does not allocate resources to that mobile terminal while the PST is ongoing).
142. At step S607, if the PST has not yet ended the algorithm proceeds back to step S602. At the end of the PST, at step S608, M(k) is then used to determine those mobile terminals which still have data to send and therefore require another (follow-on) PST (see paragraph [0123]). There is a variant of this approach described in paragraph [0125] in which the base station counts the number of occasions on which a pilot signal (i.e. a non-null value) is received from each mobile terminal. The allocation of a further pilot symbol pattern at step S600 is then based on whether the number of such occasions for a given mobile terminal exceeded a threshold.
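A corresponding sketch of the Figure 6 base-station flow, again purely illustrative and using assumed names, is set out below. It follows the per-slot analysis, the paragraph [0113] scheduling example and the M(k) bookkeeping described in paragraphs 135 to 142; it is a simplified model, not the Patents' own text.

```python
# Illustrative sketch of the Figure 6 base-station algorithm (steps S600-S608).
# The dictionary-based model and all names are assumptions made only to show the flow.

def base_station_pst(received_pilots_per_slot):
    """received_pilots_per_slot: for each PS slot, a mapping of terminal id ->
    received pilot power (terminals that sent no pilot are simply absent)."""
    M = {}                                        # S606: M(k) per terminal
    grants = []                                   # terminals granted the next uplink time frame
    for pilots in received_pilots_per_slot:       # S602: receive pilot signals (per slot)
        # S603: analyse the pilots (here, received power stands in for channel conditions)
        if pilots:
            # S604: allocate the next uplink time frame to the terminal with the
            # highest received pilot power (the paragraph [0113] example)
            grants.append(max(pilots, key=pilots.get))
        for k in set(M) | set(pilots):
            # S605/S606: register whether a pilot was received from terminal k in this slot
            M[k] = 1 if k in pilots else 0
        # S607: if the PST has not ended, loop back to S602
    # S608: at the end of the PST, M(k) identifies terminals that still have data to send
    still_need_resources = [k for k, v in M.items() if v == 1]
    return grants, still_need_resources

if __name__ == "__main__":
    slots = [{"A": 0.9, "B": 0.4}, {"A": 0.2, "B": 0.7}, {"B": 0.6}]
    print(base_station_pst(slots))  # (['A', 'B', 'B'], ['B']) - A stopped sending its pilot
```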
143. This case has not involved any arguments as to equivalents, so my task is to undertake a ‘normal’ interpretation of the claims: see Eli Lilly v Actavis UK Ltd [2017] UKSC 48. This is a very familiar test, but I am reminded that it remains an exercise in purposive construction (Icescape Ltd v Ice-World International BV [2018] EWCA Civ 2219 at [60] per Kitchin LJ (as he then was)). It is an objective exercise and the question is always what a skilled person would have understood the patentee to be using the words of the claim to mean.
144. In this case, the terms in issue are underlined in what follows. Claim 1 of the 689 patent, broken down into integers, reads:
[A] Method for transferring of data from a mobile terminal to a base station once a wireless resource enabling the transfer of data from the mobile terminal to the base station has been allocated, the method comprises the steps executed by the mobile terminal of:
[B] - transferring to the base station information indicating the need or not of a wireless resource for the transfer of data from the mobile terminal to the base station based on pilot symbols provided by the base station,
[C] - receiving, in response to the transferred information, from the base station information indicating that a wireless resource is allocated to the mobile terminal,
[D] - transferring to the base station data in the wireless resource indicated as allocated to the mobile terminal.
145. Claim 1 of the 259 patent is as follows, again broken down into integers:
[A] Method for scheduling the transfer of data from a mobile terminal to a base station in a telecommunication system, characterised in that the method comprises the steps executed by the mobile terminal of:
[B] - determining if resource of the telecommunication system needs to be allocated to the mobile terminal for transferring data to the base station,
[C] - transferring a pilot symbol before the data transmission if resource of the telecommunication system needs to be allocated to the mobile terminal for transferring data to the base station.
146. As indicated above, the issues of construction arise on integer B for EP689 and on integer C for EP259. The Defendants say that ‘pilot symbol(s)’ has the same meaning in both patents. It is convenient to address the interpretation of that term in claim 1 of EP259 first.
147. As Mr Bishop pointed out, the specifications identify three purposes for pilot symbols:
i) First, to identify the mobile terminal with the best uplink radio channel and thereby efficiently allocate uplink resources for data transmission to mobile terminals that are best able to use them;
ii) Second, to indicate QoS. The mobile terminal selects the transmission power of the signals representative of a pilot symbol pattern based on the Quality of Service of the data it has to send;
iii) Third, as a request for uplink resources. The mobile terminal stops sending signals representative of a pilot symbol pattern when it no longer has data packets in the transmission queue, in order that the base station does not allocate resources unnecessarily to mobile terminals that no longer have data to send.
148. It seems to be common ground that the first of these purposes was a conventional purpose of such symbols (for channel dependent scheduling) at the priority date, whereas the second and third purposes are the additional ones presented by the patents (even though the claims 1 are only concerned with the third purpose).
149. The Claimants’ argument is simple: the term ‘pilot symbol’ bears the ordinary meaning of a pilot or reference signal i.e. a pre-defined signal known to both transmitter and receiver (see paragraph 51 above). The Defendants professed a great deal of forensic bafflement as to what the Claimants’ position was, suggesting that the Claimants’ argument amounted, in effect, to saying that any signal can be a pilot and, similarly, that the Claimants’ interpretation included a mere request message (e.g. the step (1) message in Kwon). However, it is clear that the Skilled Person would have readily understood what the Patents were talking about when they referred to pilot or reference signals.
150. What a mere request message and a reference signal have in common is that the signal must be known to both transmitter and receiver (e.g. so that the receiver knows it has received a request for resources). At one level, both signals have some sort of predefined structure for that purpose, although the structure of a mere request message merely requires it to be recognised as such. However, in my view there remained a clear distinction in the mind of the Skilled Person between a mere request message for resources and a pilot or reference signal. A pilot or reference signal has a predefined structure (known to both transmitter and receiver) also designed to have good auto-correlation properties (i.e. to have zero or a very low correlation with itself at non-zero offsets). It is this latter characteristic which made the reference signal capable of carrying out the various purposes for which reference signals were used. A mere request message would not have that capability. I realise that the Agreed CGK was that reference signals ‘commonly’ had ideal auto-correlation properties (see paragraph 53 above) but, in the context of these Patents, the Skilled Person would expect the pilot or reference signals to have at least good auto-correlation properties.
151. The Defendants argue that, in the context of the teaching of the specification, the term must be understood as referring to something which is used not only as a request for uplink resources but also for its conventional purpose of channel quality determination.
152. The Defendants start from the agreed CGK meaning of pilot or reference signal (see paragraph 51 above). They place particular emphasis on ‘reference’ and say the signal must be used as a reference signal. In the context of the Patents, they say the use in question is as a channel quality pilot.
153. There are two main strands to this argument. First, the Defendants focus on the expressions ‘another purpose’ in [0011] and [0014], ‘further process’ also in [0014] and ‘another information’ in [0015]. They contrast this with the statement in [0010] and stress the word ‘only’ as I mentioned above. However, the skilled person would know it was not the case that pilot symbols were only used for channel quality determination. In effect, the Defendants say that ‘another/further’ means ‘additional’ as opposed to ‘different’. They point to the first purpose identified - a channel quality pilot. Hence they say that the pilot symbol of the claims is a channel quality pilot used for another or additional purpose of requesting uplink resources.
154. The Defendants also focus on the claimed advantages of the invention. They identify three:
i) Efficient use of network resources [0015] because the pilot signal is used to convey ‘another information’;
ii) The base station is precisely informed if data has to be transferred or not [0017]; and
iii) The mobile reduces its electric power resources consumption by not transmitting channel quality pilots when it has no data to transmit, relying on [0018].
155. They then say the first advantage arises from using the channel quality pilot both for that purpose and also for indicating a need for resources. They say the other two are not really benefits of the invention on the basis that any sensible system would feature them.
156. In my view, the Defendants’ arguments on the claimed advantages entail a rather selective reading of the specification, assuming the result they seek rather than independently indicating that is the right result.
157. In any event, the Defendants’ argument places far too much weight on the way that the patentee chose to introduce and describe his or her invention, distinguishing it from the particular prior art system described in the Patent. This is not quite the same vice as attempting to read into the claim a limitation which features only in an embodiment, but it is close to that. I recognise that the Defendants are relying on the general teaching about what the invention is. Nonetheless, once the description is understood, in my view it is clear that the claims do not require a channel quality pilot to be used for the stated purpose of requesting uplink resources. The distinction is narrow but real. The reference signal may be capable of being used as a channel quality pilot but the claims 1 do not require the reference signal to be so used. If the claims 1 were limited to channel quality pilots being used to request uplink resources, the skilled person would expect the claims to have said so. The fact that there is no hint at all in the claims to ‘channel quality pilot’ indicates to me that ‘pilot symbol’ in the claims is not being used in any special sense, as the Defendants argue, but simply in its ordinary sense of a pilot or reference signal.
158. It is common ground that the term ‘pilot symbol’ has the same meaning in claim 1 of EP689 as in claim 1 of EP259, but in any event my finding is reinforced by consideration of claim 1 of EP689. In that method, what is transferred from the mobile to the base station is ‘information….based on pilot symbols provided by the base station’. Nothing in this claimed method requires the pilot symbols to be ‘channel quality pilots’.
159. Accordingly, I reject the Defendants’ arguments on the meaning of ‘pilot symbol(s)’. I am left with the impression that the principal driver for those arguments was the desire to find a non-infringement point, as opposed to being based on an objective assessment of the claims.
160. There is one point to add. The experts agreed that the Patents proposed a new use for reference or pilot signals. At this point it is worth noting, principally because of the arguments I come to later on infringement/essentiality, that whereas traditionally reference or pilot signals did not convey information from higher layers, the new use proposed in the Patents is that reference signals are used to convey information from higher layers: specifically a need for uplink resources.
161. In their opening skeleton, the Defendants made a number of points about these words in claim 1 of 689, which the Claimants addressed in their oral opening. It is not entirely clear whether the Defendants maintain those points - they were not mentioned at all in the Defendants’ closing submissions - and they appear to have fallen away in the light of the evidence on Kwon.
163. The Defendants say that claim 1 of 689 requires information to be transferred which positively indicates the lack of need for uplink resources.
164. In this regard both parties rely on claim 3. Claim 3 of 689 claims a method according to claim 2 (itself dependent on claim 1) characterised in that information indicating the ‘not need’ of a wireless resource for the transfer of data from the mobile terminal to the base station is represented by a signal having a power set to null value.
165. The Defendants submit that the subsidiary claim 3 involves a limitation on claim 1, so the argument goes: claim 1 must be broader. Although subsidiary claims are very frequently narrower than the principal claim, this is not a universal rule. It depends on the circumstances. The Defendants’ construction of claim 1 would have very odd consequences. First, the example embodiment would fall outside claim 1. Second, it would also mean that claim 3 fell outside the scope of claim 1. Third, it would seem to defeat the purpose of the whole invention in which the mobile sends a pilot or reference signal to indicate a need for uplink resources, whereas (on the Defendants’ argument) when it does not need resources it has to transmit (presumably periodically) a positive message indicating no need for resources. This does not make technical sense.
166. The more likely construction is that, on this ‘not need’ point, claim 3 does not introduce an additional limitation to claim 1, and they mean the same thing. Even if I am wrong about this, the same result is achieved by claim 3, even if the ‘not’ need must be indicated by silence detected periodically at the base station, due to the dependence on claim 2. In relation to claim 1 of EP689 I find that the ‘not’ need can be indicated by silence.
168. In terms of ‘pilot symbols provided by the base station’, the Defendants accepted Mr Anderson’s explanation of this requirement:
‘The patent contemplates that the signal sent by the mobile terminal may be based on a pilot symbol pattern provided by the base station, or on information provided by the base station enabling the mobile terminal to identify the pilot symbol pattern (see paragraph [0053], which corresponds to [0046] of the 259 patent).’
169. As explained in [0053], it can be ‘an information, like an indicia, identifying the pilot symbol pattern’. This reflects the purpose of the pilot symbol pattern: it is a unique identifier of the mobile to the base station, as allocated by the base station.
170. In terms of ‘information….based on’ those pilot symbols, an issue of construction emerged in one of the Defendants’ arguments on added matter. I deal with this issue of construction in that context - see paragraph 365 below.
171. The Claimants allege infringement of the 259 and 689 patents by sales of mobiles of the various groups of Defendants, namely OnePlus mobiles (of the Fourth, Fifth and/or Sixth Defendants), Oppo mobiles (of the Seventh and/or Eighth Defendants) and Xiaomi mobiles (of the Ninth, Tenth, Eleventh and/or Twelfth Defendants). The allegation of infringement is based on the essentiality of the claims of the 259 and 689 patents to the LTE standard. Each group of Defendants has admitted that the aspects of the LTE standard relied upon are mandatory and are implemented by their mobiles. Further, the Defendants do not advance a case that there are any material differences between Release 8 of the LTE technical specifications relied upon and any later versions and releases. Accordingly, the sole issue for this trial is whether the claims of the 259 and 689 patents, if valid, are essential to Release 8 of the LTE standard.
172. More specifically, for Release 8 the following specifications were used by the experts, but principally the first three:
Physical Layer
3GPP TS 36.211 v8.8.0 “Physical Channels and Modulation”
3GPP TS 36.212 v8.7.0 “Multiplexing and Channel Coding”
3GPP TS 36.213 v8.8.0 “Physical Layer Procedures”
Medium Access Control Layer
3GPP TS 36.321 v8.7.0 “Medium Access Control (MAC) protocol specification”
Radio Resource Control Layer
3GPP TS 36.331 v8.7.0 “Radio Resource Control (RRC); Protocol specification”
Radio Interface System Overview
3GPP TS 36.300 v8.10.0 “[E-UTRA and E-UTRAN]; Overall description; Stage 2”
173. More specifically, the question is whether claim 1 of the 259 patent and/or claim 1 of the 689 patent are essential to the functionality of the LTE standard concerned with the transmission by the UE of a scheduling request (SR) by signals known as PUCCH format 1, PUCCH format 1a and PUCCH format 1b that are sent by the UE on a resource known as sr-PUCCH, where the PUCCH is the Physical Uplink Control CHannel. It is the Claimants’ case that they are.
174. Although the signals in question are reasonably complex, ultimately I did not detect any dispute as to how they are constructed or transmitted. Rather the issues concern the correct characterisation of various parts of these signals.
175. Ultimately, much of the dispute comes down to a narrow point. In summary, the Defendants rely on the language used in the relevant LTE specification and (a) the way in which it describes the DMRS (DeModulation Reference Signal) separately from the PUCCH, and (b) that it says that it is the PUCCH that signals SR. The Claimants say, in effect, the language on which the Defendants rely is rather loose. They say one has to look at the signal which the specification says must be sent (call it signal X). They say that if signal X is in fact a signal of the type required by the claims, then the fact the standard may describe signal X differently is neither here nor there.
176. Mr Anderson adopted the shorthand ‘format 1x’ for the PUCCH format 1/1a/1b signals sent on the sr-PUCCH resource and I will do the same, although it is necessary to distinguish between format 1 and 1a/1b signals at various points. Each of the PUCCH format 1x signals is made up of DeModulation Reference Signals (DMRS) and PUCCH data symbols (also referred to as the d(0) region).
177. In essence, the Claimants’ case is that a positive SR is signalled by the presence of a PUCCH format 1, 1a or 1b signal on its assigned sr-PUCCH resource. Conversely, the absence of a signal on that resource indicates a negative SR. The Claimants say that the whole of a PUCCH format 1/1a/1b signal is a reference or pilot signal or comprises such signals.
178. Turning to the claims (and it is common ground that it is only necessary to consider claim 1 in each patent), on EP259 claim 1, the Defendants admit that integers A and B are implemented by the LTE specifications relied upon. The issues arise in relation to integer C where the Defendants raised a number of points in their opening skeleton argument:
i) First, they contend that PUCCH format 1x signals neither are nor include a ‘pilot symbol’;
ii) Second, they contend a DMRS may be sent in circumstances in which resources are not needed, for example a PUCCH format 1a or 1b signal that does not include a Scheduling Request;
iii) Third, UEs also send an SRS, but that this is sent irrespective of whether or not resources need to be allocated.
179. So far as EP689 claim 1 is concerned the Defendants admit integers A and D are implemented by the Specifications. The same issues arise under integers B and C and are as follows:
i) First, the base station sending ‘configuration parameters’ such as the SchedulingRequestConfig IE does not satisfy the requirement that ‘pilot symbols’ are provided by the base station.
ii) Second, the signalling or non-signalling of the SR is not information ‘based on pilot symbols’, nor are the PUCCH format 1x signals ‘pilot symbols’.
iii) Third, insofar as any relevant information is transferred by the format 1x signals, it is not transferred by the DMRS.
iv) Fourth, the absence of signalling of the SR is not ‘information’ that is transferred to the base station. This, as I understand it, is the construction issue as to whether the ‘not’ need can be indicated by silence.
v) Fifth, a DMRS may be sent in circumstances where resources are not needed. This is the same point as made in relation to EP259.
vi) Sixth, UEs send an SRS but this is sent whether or not resources need to be allocated. This is the same point as made in relation to EP259.
180. Whilst some of those points are clear enough, some became much more elaborate as the arguments progressed, not least because of the squeeze arguments which the Defendants put forward based on Kwon. In closing, the Defendants made some additional points, so I found it convenient to group the Defendants’ various points as follows. Some I can deal with immediately, but I address the principal points later, after I have explained the technical detail of the signals relied upon.
181. First, so far as the PUCCH format 1x signals are concerned, the Defendants contend that SR is signalled only by the symbols in the d(0) region, which they say are not reference or pilot signals but data symbols, and not by the DMRS symbols, which they accept are reference signals (used for coherent demodulation). Furthermore, the Defendants say that if the DMRS does indicate the need for uplink resources, that situation is indistinguishable from the demodulation pilot symbols that accompany Kwon message (1).
182. The second point is closely related. The Defendants say the DMRS and d(0) regions are separate and distinct, for a number of reasons. Principally, as I understand the arguments, the Defendants rely on certain statements in the standard TS36.211 v8.8.0 to the effect (they say) that the DMRS is associated with transmission of the PUCCH but is not the PUCCH. The Defendants also rely on some detailed processing points that different cyclic shifts and different orthogonal sequences are applied and that the DMRS is not scrambled, whereas the d(0) symbols are. The Defendants emphasise their argument that the Claimants cannot put DMRS and PUCCH together. They say: ‘The whole point of the patents is that it is the pilot, not something else, which indicates the need for resources. LTE has (i) a pilot and (ii) a separate indication of a need for resources: that is the opposite of the claimed invention.’ Again, the Defendants say that if they are wrong about that, the combination is indistinguishable from the message (1) request of Kwon, which would have a demodulation pilot.
183. Third, the Defendants say that the DMRS is sent when no resources are required, which they say indicates that the presence or absence of the DMRS does not indicate SR or no SR.
184. Fourth, the Defendants say the DMRS is not sent at different powers by a UE e.g. to communicate the quality of service of data that is sent.
185. Fifth, the Defendants say that if the DMRS were sent alone and no SR was sent, then the network would not interpret the presence of the DMRS as a need for an uplink resource.
186. Sixth, the Defendants say that even if the DMRS are pilot signals indicating a need for uplink resources, they are not the right sort of pilot signals required by the claim (i.e. they have to be channel quality pilots). I have already rejected this argument in the construction section above and need say no more about it.
187. Seventh, specifically on 689, the Defendants say that there is no positive signal indicating no need for uplink resources. Again, I have already decided this point against the Defendants in the construction section above.
188. Eighth, again on 689, whilst the Defendants accept the DMRS signals are provided by the base station, they say that the d(0) symbols are not.
189. Ninth is a point which is concerned with coherent modulation. As I understand the Defendants’ argument, it is that if all that the reference signal does is coherent demodulation (which it can do because the reference signal is multiplexed with the data signals) such a reference signal does not convey any information (such as a need for uplink resources). As Mr Anderson put it in his reply report: ‘In such cases, the reference signals are not themselves part of the message; they are part of the means by which the message can be understood.’ (a point drawn out in the CGK section at paragraph 65 above).
190. Tenth, the Defendants say that it is not open to the Claimants to rely on the PUCCH Format 1 signals alone because, they say, the Claimants’ pleaded case concerns the PUCCH format 1/1a/1b signals and the Claimants would have to seek permission to amend to rely on PUCCH format 1 alone. I understand the argument, but I do not see that it prevents the Court from finding, say, that the PUCCH format 1 signals do constitute pilot signals within the claims 1 but that PUCCH format 1a/1b signals do not, if that is the conclusion. Both parties have had ample opportunity to make their points in relation to all three formats and a split finding would not be unfair to either side. However, the point which remains is that the Defendants seek to draw a distinction between the PUCCH format 1 signals and the format 1a/1b signals. This remaining point relates to the previous one and I shall deal with it in that context.
191. Some of the Defendants’ points mix up the PUCCH itself with the actual signal by which an SR is transmitted on the PUCCH. Some, as can be seen from the summary above, seek to distinguish between the DMRS symbols and the d(0) symbols in the PUCCH format 1x signals. Furthermore, it is necessary to be clear as to which parts of any of the PUCCH signals are coherently modulated or not, and whether in any particular context, the DMRS symbols are or are not used to perform coherent demodulation.
192. For these reasons, it is necessary to describe in some detail various aspects of LTE, the PUCCH format 1x signals and how an SR is transmitted. On this part of the case the role of the experts was to extract and translate the relevant parts of the various TSs in Release 8 at an appropriate level of detail to enable the issues to be resolved. As I have already indicated, the technicalities were not in dispute. Instead, the Defendants seek to rely on certain passages in TS36.211, which they say cast matters in a different light.
193. Much of the technical detail which follows is based on Mr Anderson’s reports but with various additional explanations and points provided by Mr Bishop. With that technical detail in mind (which I find as facts), I will then be able to return to address the arguments. That considerable detail is required should come as no surprise, because, as the Patents in suit illustrate, the concept can be stated succinctly whereas explaining how the concept is implemented (or not) in a practical system may be and is much more complicated.
194. What follows I have divided into sections:
i) Part 1 - Basic Concepts in LTE;
ii) Part 2 - Access to UL-SCH Resources;
iii) Part 3 - Signalling SR via PUCCH;
iv) Part 4 - Allocation of UL-SCH resources to UEs;
v) Part 5 - How PUCCH format 1x signals are constructed;
vi) Part 6 - The role of coherent modulation.
195. Having set out the technical detail (derived from various parts of the LTE specifications), I must also deal with the Defendants’ arguments which rest on particular passages in TS36.211 in particular. Hence the final section is:
i) Part 7 - The passages in TS36.211 relied upon by the Defendants.
LTE System Architecture
196. The LTE standards specify how User Equipment, or UEs, can communicate with an Evolved Universal Terrestrial Radio Access Network (E-UTRAN). The E-UTRAN comprises a number of base stations termed evolved Node-Bs (eNBs) as shown in TS 36.300 Figure 4-1 (as reproduced below in Figure 7). The eNBs communicate via the LTE radio interface with the UEs. eNBs may also communicate (using other protocols) with each other (via an “X2” interface), and with a core network (via an “S1” interface). Entities that lie within the core network are labelled MME / S-GW in the figure. The MME (Mobility Management Entity) is responsible for control plane operations whereas the SGW (Serving Gateway) is responsible for routing user plane data.
Figure 7 - Reproduction of TS 36.300 Figure 4-1 - Overall Architecture
Shared Channels in LTE
197. The LTE system was designed from the outset to support only packet based communications. Unlike GSM and UMTS, there are no ‘circuit switched’ services. As a result, the radio interface of LTE adopts an approach that uses only shared channels in both the uplink and downlink directions to convey application data (in the ‘user plane’) and control messages (in the ‘control plane’). This differs from GSM and UMTS in which the radio interface also supports dedicated channels to convey application data (for example to support circuit switched services such as voice).
198. In LTE, radio resources for the uplink and downlink shared channels are allocated amongst users of a cell by a scheduler located within the eNB. For the uplink shared channels, the resources are scheduled in order to facilitate the transmission of data from a UE to the eNB and the UE supports procedures that enable this functionality. In other words, in order for a UE to be able to send either user plane data or control plane information on the uplink, it must first have been allocated a valid grant of uplink shared channel resources.
LTE protocol stack
199. As with other wireless communications systems, the LTE specifications define a number of protocol layers that support the transmission of user plane and control plane messages. Figures 4.3.1-1 and 4.3.2-1 of TS 36.300 (as reproduced below in Figure 8) show, respectively, the user plane and control plane protocol stacks that apply to LTE.
Figure 8 - Reproduction of TS 36.300 Figures 4.3.1-1 and 4.3.2-1 - User plane and Control plane protocol stacks
Uplink Resources
200. Uplink shared channel information is transmitted by the physical layer, by way of a channel called the Physical Uplink Shared Channel (PUSCH). This physical channel carries user plane and control plane information originating from higher layers of the protocol stacks. At the RLC layer, different ‘logical channels’ may be associated with user plane data and control plane information. These are provided to the MAC layer, wherein they are multiplexed onto a transport channel referred to as the uplink shared channel (UL-SCH). Thus, as part of its operation, the MAC layer in the UE assembles MAC Protocol Data Units (PDUs) from MAC Service Data Units (SDUs - which equate to higher-layer RLC PDUs for the different logical channels that are provided to it). The MAC then sends these MAC PDUs to the physical layer for transmission on PUSCH. An overview of this architecture is shown in Figures 6.1.3.1-1 and 5.3.1.2-2 of TS 36.300. Mr Anderson combined those two figures as shown in Figure 9 below with annotations added in blue to show the flow of the information between the layers.
Figure 9 - Reproduction of TS 36.300 Figures 6.1.3.1-1 and 5.3.1.2-2 (with annotations) - Mapping between uplink logical channels and uplink transport channels and
Mapping between uplink transport channels and uplink physical channels
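Purely as a conceptual sketch of the layering described in paragraph 200 (and not the structures actually defined in TS 36.321), the flow of MAC SDUs into a MAC PDU can be pictured as follows; the class and field names are illustrative assumptions.

```python
# Conceptual sketch only: RLC PDUs on different logical channels become MAC SDUs,
# which the MAC layer multiplexes into a MAC PDU for transmission on UL-SCH / PUSCH.
# The classes and field names are assumptions, not the structures defined in TS 36.321.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MacSdu:              # equates to an RLC PDU handed down to the MAC layer
    logical_channel_id: int
    payload: bytes

@dataclass
class MacPdu:              # assembled by the MAC layer and passed to the physical layer
    sdus: List[MacSdu] = field(default_factory=list)

def assemble_mac_pdu(sdus: List[MacSdu]) -> MacPdu:
    """Multiplex MAC SDUs from different logical channels onto the UL-SCH transport
    channel (one MAC PDU per transmission opportunity, in this simplified model)."""
    return MacPdu(sdus=list(sdus))

if __name__ == "__main__":
    pdu = assemble_mac_pdu([MacSdu(1, b"control"), MacSdu(3, b"user data")])
    print(len(pdu.sdus))   # 2 - both logical channels multiplexed into one MAC PDU
```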
201. The role of the physical layer is to prepare the data (in the form of MAC PDUs) for transmission over the air (using either OFDM in the downlink or Single-Carrier FDMA (SC-FDMA) in the uplink). OFDM and SC-FDMA both utilize a 2-dimensional resource space in time and frequency as shown in Figure 10. The smallest unit of time-frequency resource is referred to as a Resource Element (RE) and corresponds to one sub-carrier in frequency and one OFDM (or SC-FDMA) symbol in time. Diagrams showing multiple REs are often referred to as ‘resource grids’.
202. REs are grouped into larger time-frequency resource units referred to as Physical Resource Blocks (PRBs). A PRB spans a group of 12 contiguous sub-carriers in frequency and either 6 or 7 OFDM/SC-FDMA symbols in time (referred to as a time slot), depending on configuration. [1] Figure 10 shows the 7 symbol configuration.
203. Each time slot of the system may therefore comprise several PRBs in frequency that together span the overall uplink or downlink system bandwidth. Slots are organised into further, larger, units in time, with two slots forming a 1ms subframe and 20 slots (or 10 subframes) forming a 10ms frame. The LTE system supports a range of system bandwidths from 1.4 MHz to 20 MHz, accommodating 6 to 100 resource blocks per slot.
Figure 10 - Time-Frequency Resources in LTE
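The dimensions given in paragraphs 201 to 203 can be verified with simple arithmetic, as in the following illustrative sketch (normal cyclic prefix, i.e. 7 symbols per slot, is assumed).

```python
# Simple arithmetic check of the LTE time-frequency dimensions described above.

SUBCARRIERS_PER_PRB = 12
SYMBOLS_PER_SLOT = 7          # 6 for the extended cyclic prefix configuration
SLOTS_PER_SUBFRAME = 2        # each slot is 0.5 ms, so a subframe is 1 ms
SUBFRAMES_PER_FRAME = 10      # a frame is 10 ms (20 slots)

resource_elements_per_prb = SUBCARRIERS_PER_PRB * SYMBOLS_PER_SLOT
slots_per_frame = SLOTS_PER_SUBFRAME * SUBFRAMES_PER_FRAME

print(resource_elements_per_prb)   # 84 REs in one PRB (one slot deep)
print(slots_per_frame)             # 20 slots in a 10 ms frame

# System bandwidths of 1.4 MHz to 20 MHz correspond to 6 to 100 PRBs per slot,
# so the uplink resource grid per slot spans:
for n_prb in (6, 100):
    print(n_prb * SUBCARRIERS_PER_PRB, "sub-carriers")   # 72 and 1200 sub-carriers
```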
Multiple access and orthogonality
204. The frequency resources of the LTE system are therefore divided into sub-carriers and (at a larger scale) into different PRBs. The sub-carriers of an OFDM or SC-FDMA symbol overlap with one another in frequency but, due to the way in which the transmitted waveforms are generated, the sub-carriers are orthogonal.
205. In LTE, it is therefore possible for transmissions in both the uplink and the downlink to be frequency domain orthogonal, wherein different users are allocated different PRBs. The PRBs are orthogonal to one another as they occupy different sets of (orthogonal) sub-carriers. In general therefore, there exists an orthogonal resource space for both uplink and downlink and it is a task of the eNB scheduler to manage and distribute these resources amongst users via the use of dynamically-scheduled uplink and downlink shared channels. An assignment of uplink or downlink shared channel resources typically applies for a duration of one subframe. The assignments are carried to the UE via physical layer control signalling on the downlink.
206. It is further possible in LTE for uplink signals (of some types) from different users to occupy the same PRB. These signals would normally interfere with one another, however the LTE system provides that they are constructed using particular sequences (or codes) and this technique, referred to as code domain multiplexing, is used to preserve their orthogonality even though they share the same PRB. Further detail on this is provided below.
207. Uplink SC-FDMA signals from different UEs remain orthogonal only if they are time aligned upon their arrival at the eNB receiver. As a result, UEs must be time aligned if they are to be permitted to transmit on the uplink shared channel resources. However, to accommodate UEs that are not timing aligned, LTE includes a random access resource. The random access channels are shown as RACH (MAC layer) or PRACH (physical layer) in Figure 9 above.
PUCCH channel
208. In addition to the PUSCH channel and the PRACH channel, LTE contains an additional physical layer uplink channel called the Physical Uplink Control Channel (PUCCH). To transmit on PUCCH a UE must be timing aligned and have been assigned (either dynamically or semi-statically) resources on which to do so. However, while PUSCH may carry user data and some types of control signalling, PUCCH carries only uplink control signalling.
209. Figure 11 illustrates an example of the general structure of uplink resources in LTE, with each small box indicating a single PRB. As shown in Figure 11, the uplink system bandwidth is divided into an inner (or central) PUSCH frequency region and an outer PUCCH frequency region (which is itself comprised of two frequency regions that are located at the upper and lower edges of the uplink system bandwidth). The number of PRBs that are reserved for the PUCCH frequency region is configurable, although this is generally smaller than the PUSCH frequency region. Example allocations of PUSCH resources to different UEs are shown in Figure 11 by means of the differently coloured resource blocks.
Figure 11 - Example of the LTE uplink resource structure
210. When a UE is allocated PUCCH resources, the allocation will take the form of a ‘PRB pair’ from opposite ends of the system bandwidth within a subframe. This is shown in Figure 5.4.3-1 of TS 36.211 (reproduced below in Figure 12, with annotations in blue to provide additional explanation). Figure 12 is essentially a simplified version of Figure 11 above. It shows only one subframe and details only the PRBs of the PUCCH region (that is, the entire PUSCH region is shown as a single box with no further details). The two halves of each PRB pair are shown in Figure 12 as having matching m values.
Figure 12 - Reproduction of TS 36.211 Figure 5.4.3-1 - Mapping to physical resource blocks for PUCCH
211. The transmission of a signal on PUCCH occupies both of the assigned PRBs within a pair. This is used to provide additional robustness to the PUCCH signal by way of frequency diversity.
212. In order for a UE to transmit a MAC PDU to the eNB (i.e. on the uplink), the UE requires a grant of UL-SCH resources. Uplink Buffer Status Reports (BSR) are used by a UE to provide information to the eNB scheduler regarding the volume and nature of pending data at the UE; these BSRs are conveyed via MAC signalling. Three different types of BSR are defined in the LTE standard, but the experts agreed that it is only necessary to consider the regular BSR.
213. An illustration showing the possible outcomes of a Regular BSR is shown in Figure 13 below. We are concerned with the situation where uplink resources are not available but the sr-PUCCH resource is configured so the Regular BSR triggers an SR via PUCCH.
Figure 13 - Illustration of the use of BSR and SR in LTE
214. Once an SR has been triggered, it remains ‘pending’ until it is cancelled. The cancellation of an SR occurs when UL-SCH resources for a new transmission (as opposed to a retransmission) are available or (for the case in which the UE has been allocated sr-PUCCH resources) when a maximum number of allowed SR transmissions has been reached. This maximum is configured by the eNB via a parameter referred to as dsr-TransMax.
215. The SR procedure is executed for every TTI; therefore, for as long as the UE has a pending SR, it checks each TTI for the existence of the recurring sr-PUCCH resource and signals a PUCCH SR if that resource is present.
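The SR procedure just described can be expressed, purely by way of illustration, in the following simplified sketch. The parameter names other than dsr-TransMax are assumptions, and the model deliberately reflects only the features mentioned above, not the full TS 36.321 procedure.

```python
# Illustrative sketch of the SR procedure in paragraphs 212-215: once a Regular BSR
# triggers an SR, the SR stays pending; in every TTI the UE checks for its recurring
# sr-PUCCH opportunity and signals SR there, until UL-SCH resources for a new
# transmission arrive or dsr-TransMax transmissions have been made. Simplified model only.

def run_sr_procedure(n_ttis, sr_periodicity, grant_tti, dsr_trans_max):
    sr_pending, sr_count, transmissions = False, 0, []
    for tti in range(n_ttis):
        if tti == 0:
            sr_pending = True                          # Regular BSR triggers an SR
        if sr_pending and tti == grant_tti:            # UL-SCH grant for a new transmission
            sr_pending = False                         # cancels the pending SR
        if sr_pending and tti % sr_periodicity == 0:   # recurring sr-PUCCH opportunity
            transmissions.append(tti)                  # signal SR on PUCCH in this TTI
            sr_count += 1
            if sr_count >= dsr_trans_max:              # maximum allowed SR transmissions
                sr_pending = False
    return transmissions

if __name__ == "__main__":
    # SR opportunity every 5 TTIs, grant arrives in TTI 12, at most 8 SR transmissions
    print(run_sr_procedure(n_ttis=20, sr_periodicity=5, grant_tti=12, dsr_trans_max=8))
    # -> [0, 5, 10]: SR is sent until the grant in TTI 12 cancels the pending SR
```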
PUCCH formats
216. The PUCCH exists to carry physical layer Uplink Control Information (UCI) between the UE and the eNB. The types of UCI that may be carried by PUCCH include:
i) Hybrid ARQ Acknowledgements (“HARQ-ACK”) of downlink transmissions - see paragraph 101 above. Although that description relates to UMTS HSDPA/HSUPA, the high level concept remains the same for LTE;
ii) Channel Quality Indications (CQI). These are messages used to report information to the eNB relating to downlink channel quality; and
iii) Scheduling Request (SR).
217. In Release 8 of the 3GPP specifications (as referenced above), a number of different PUCCH formats are defined and are referred to as formats 1, 1a, 1b, 2, 2a and 2b. These formats are used to convey the different control information types, or combinations of types. Section 10.1 of TS 36.213 sets out what combinations of information each of the different formats can convey. Amongst these are those that include SR:
“The following combinations of uplink control information on PUCCH are supported:
[…]
- Scheduling request (SR) using PUCCH format 1
- HARQ-ACK and SR using PUCCH format 1a or 1b”
218. Therefore, PUCCH format 1, format 1a and format 1b are the PUCCH signals that can be used to indicate to the eNB that an SR has been triggered in a UE. For convenience, I will refer to these three formats of PUCCH signal sent on sr-PUCCH collectively as ‘format 1x’.
219. From the UE’s perspective, there are three different sub-groups of resources which are used to carry the different UCI combinations via the corresponding PUCCH formats. The eNB provides the UE with information regarding these resources via RRC signalling.
220. An illustration summarising the three different PUCCH resources (and the formats and UCI types that they may carry) is provided in Figure 14 below. The resources n(1)PUCCH,SRI and n(2)PUCCH are semi-statically allocated to the UE (that is, once configured, they continue to exist until released) whereas the resource n(1)PUCCH, which is used for HARQ-ACK signalling, is implicitly (and dynamically) assigned as the result of receiving each downlink message.
Figure 14 - Illustration of PUCCH resource types and the PUCCH formats that they support
222. Thus, PUCCH type 1 resources are used for signalling SR and/or HARQ-ACK. For normal cyclic prefix, PUCCH type 2 resources are used for signalling CQI, either alone using PUCCH format 2, or together with HARQ-ACK using PUCCH format 2a or 2b.
223. A particular PUCCH resource within the PUCCH type 1 resource space is identified and defined by a logical resource index n(1)PUCCH. Similarly, a particular PUCCH resource within the PUCCH type 2 resource space is identified and defined by a logical resource index n(2)PUCCH.
224. Within the PUCCH type 1 resource space, different logical resource indexes are used for different purposes, as shown in Figure 14. So the semi-statically assigned ‘sr-PUCCH resource’ (labelled n(1)PUCCH,SRI) and the dynamically assigned ‘HARQ-ACK resource’ (labelled simply n(1)PUCCH) are both logical resource indexes into the PUCCH type 1 resource space, but the indexes (and therefore the resources) are separate and distinct. As specified in section 10.1 of TS 36.213:
n(1)PUCCH = n(1)PUCCH,SRI (for the sr-PUCCH resource) and
n(1)PUCCH = nCCE + N(1)PUCCH (for the HARQ-ACK resource).
225. As the sr-PUCCH resource and the HARQ-ACK resource are separate resources, they each have a different combination of the PRBs and codes that are used.
226. TS 36.211 section 5.4.1 states that for PUCCH format 1 “information is carried by the presence/absence of transmission of PUCCH from the UE”. A PUCCH transmitted on the assigned sr-PUCCH resource may be a format 1 transmission (when SR is carried on its own) or a PUCCH format 1a/1b transmission (when SR is carried together with HARQ-ACK). This latter case is addressed in section 7.3 of TS 36.213, which states that:
“For FDD, when both ACK/NACK and SR are transmitted in the same sub-frame a UE shall transmit the ACK/NACK on its assigned ACK/NACK PUCCH resource for a negative SR transmission and transmit the ACK/NACK on its assigned SR PUCCH resource for a positive SR transmission.”
228. SR and HARQ-ACK are sent together (via a single transmission using format 1a or 1b on the sr-PUCCH resource) whenever there is a need for their simultaneous transmission in the same sub-frame. Which PRB and combination of codes is used is determined from the configured sr-PUCCH resource. It is the use of the sr-PUCCH resource (rather than the HARQ-ACK resource) that distinguishes a message conveying an SR and ACK/NACK from one conveying simply ACK/NACK information.
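The resource-selection rule described in paragraphs 224 to 228 and in the passage of TS 36.213 section 7.3 quoted above can be summarised in the following illustrative sketch. The function name and resource labels are assumptions chosen to match the notation used in this judgment.

```python
# Illustrative sketch: when HARQ-ACK and SR coincide in a sub-frame, the ACK/NACK is
# sent on the sr-PUCCH resource for a positive SR and on the HARQ-ACK resource for a
# negative SR; SR alone is signalled by the mere presence of a format 1 transmission.

def select_pucch_transmission(sr_positive, have_harq_ack):
    if have_harq_ack:
        resource = "n(1)PUCCH,SRI" if sr_positive else "n(1)PUCCH (HARQ-ACK)"
        return ("format 1a/1b", resource)          # ACK/NACK bits carried in d(0)
    if sr_positive:
        return ("format 1", "n(1)PUCCH,SRI")       # SR alone: presence of the signal
    return ("no transmission", None)               # negative SR alone: nothing is sent

if __name__ == "__main__":
    for sr, ack in [(True, True), (False, True), (True, False), (False, False)]:
        print(sr, ack, "->", select_pucch_transmission(sr, ack))
```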
229. Some PRBs of the PUCCH region may be wholly occupied by the PUCCH type 1 resource space, whilst others may be wholly occupied by the PUCCH type 2 resource space. The sizes of the PUCCH type 1 and type 2 resource spaces are configured on a cell-wide basis. The PUCCH type 2 resource space occupies one or more PRBs towards the ‘outer’ edge of the PUCCH frequency region whereas the PUCCH type 1 resource space occupies one or more PRBs towards the ‘inner’ edge of the region. However, it is also possible for the system to be configured such that a PRB (in the ‘crossover’ region between the two) contains a mix of PUCCH type 1 resources and PUCCH type 2 resources. In this ‘mixed PRB’ case, PUCCH format 2, 2a or 2b transmissions use only some of the available cyclic shifts whilst PUCCH format 1, 1a or 1b transmissions use the others. I explain why this is possible in paragraph 240 below.
230. Figure 15 below illustrates an example of how the different PUCCH resources are used by the UE. In the figure, the vertical axes of the grids shown are intended as a convenient way of representing separate logical resources for PUCCH - i.e. a particular combination of PRB, cyclic shift and (for the PUCCH type 1 resource space) orthogonal spreading sequence; they are not intended to represent the actual physical resource blocks used for those transmissions. The horizontal axes represent time (in sub-frames).
231. The figure shows a periodic resource, n(2)PUCCH, that sits within the PUCCH type 2 resource space and which is used by the UE for CQI reporting purposes. [2] The UE is also semi-statically assigned a particular PUCCH resource, n(1)PUCCH,SRI, that recurs periodically within the PUCCH type 1 resource space, and which is used to transmit SR (with or without a simultaneous HARQ-ACK). [3] Further, the UE has access (when required) to HARQ-ACK resources in a separate region of the PUCCH type 1 resource space (beginning at N(1)PUCCH), that are dynamically assigned to facilitate the transmission of HARQ-ACK without SR.
Figure 15 – Illustration of PUCCH transmissions on different PUCCH resources
232. Mr Anderson explained this figure in the following passage, which was not challenged:
233. In the example shown in Figure 15, the UE has an opportunity to transmit SR on the 1st sub-frame but does not do so as no SR has been triggered. On the 3rd, 4th and 6th sub-frames, the UE is required to return HARQ-ACK following its receipt of downlink data. For the first two of these, no SR has been triggered (and even if one had been, there is no SR opportunity available) and the UE therefore conveys the HARQ-ACK information by transmitting PUCCH format 1a or 1b (as labelled in the blue ovals) on particular HARQ-ACK resources. [4] By the 6th sub-frame however, an SR has been triggered and the UE is also required to transmit HARQ-ACK information. The UE therefore transmits a PUCCH format 1a or 1b signal on the assigned n(1)PUCCH,SRI resource. In the 11th sub-frame, there is a further opportunity to send an SR. Because the UE has not yet been allocated uplink shared channel resources (in response to the SR that was transmitted on the 6th sub-frame) and because the UE does not have any HARQ-ACK information to send, it therefore transmits a PUCCH format 1 signal on its n(1)PUCCH,SRI resource. Therefore, as explained above, it is the presence of a PUCCH format 1, 1a or 1b signal on the assigned n(1)PUCCH,SRI resource that indicates a positive SR to the eNB.
234. The term ‘demodulation reference signal’ (DMRS) refers to one of two types of reference signal that are defined by the LTE specifications, the other type being the ‘sounding reference signal’ (SRS). Section 5.5 of TS 36.211 lists these two types and explains that DMRS are associated with a transmission on either PUSCH or PUCCH whereas SRS are not. A variety of different PUSCH and PUCCH transmissions are possible and each may comprise a different DMRS. For example, PUSCH transmissions comprise a DMRS spanning the same PRBs that have been dynamically assigned for PUSCH via the uplink grant, whereas the various different PUCCH formats and UCI combinations each comprise a different respective DMRS that spans the PUCCH resource (and code) that is to be used for that particular transmission type (as shown above in Figure 14).
235. As DMRS are associated with a transmission on either PUCCH or PUSCH, they occupy the same range of frequencies as the remainder of the same PUCCH or PUSCH transmission (that is, within particular PRBs of the respective PUCCH or PUSCH frequency region). SRS on the other hand is not associated with a PUCCH or PUSCH transmission and is transmitted within a range of frequencies that is configured by the eNB and which lies within the PUSCH frequency region. To support adaptive modulation and coding and/or channel dependent scheduling for the uplink shared channel (UL-SCH), the eNB would typically use information obtained from either PUSCH DMRS or SRS (if configured). These are suitable for this purpose as they lie within the PUSCH frequency region.
237. For PUCCH format 1x, the signal transmitted in each half of the PRB pair follows the same general structure. For the purposes of explanation, it is therefore sufficient to consider the structure of the signal within a single PUCCH PRB. As shown in Figure 16, this consists of two regions: (i) the Demodulation Reference Signal (DMRS) region; and (ii) the remainder, which was referred to by Mr Anderson and at trial as the d(0) region. For the case of ‘normal cyclic prefix’, in which there are 7 symbols per 0.5ms slot, the central 3 symbols of the slot are occupied by a DMRS, while the remaining 4 symbols comprise the d(0) region.
Figure 16 - Illustration of PUCCH Formats 1, 1a and 1b within a Physical Resource Block
238. Thus, each slot of a transmitted PUCCH sub-frame includes some SC-FDMA symbols (constructed in accordance with section 5.5.2.2 of TS 36.211) that carry DMRS symbols and some SC-FDMA symbols (constructed in accordance with section 5.4 of TS 36.211) which carry modulation symbols (though this is not universally the case - in some cases modulation symbols are also carried by DMRS symbols, see further below).
239. TS 36.211 section 5.4.1 defines that for PUCCH format 1, d(0) is equal to a fixed value of 1 and therefore the signal in the d(0) region has a predetermined waveform (in the same way as for the DMRS region). For PUCCH formats 1a and 1b, the d(0) field is used to carry 1 or 2 bits (respectively) of HARQ Acknowledgement information. This is shown in table 5.4.1-1 (which is reproduced in Figure 17 below), which sets out that d(0) may assume one of 2 values {-1,+1} for PUCCH format 1a and one of 4 complex values for PUCCH format 1b {+1, +j, -1, -j}.
PUCCH format | b(0), …, b(Mbit − 1) | d(0)
1a | 0 | 1
1a | 1 | −1
1b | 00 | 1
1b | 01 | −j
1b | 10 | j
1b | 11 | −1
Figure 17 - Reproduction of TS 36.211 Table 5.4.1-1 - Modulation symbol d(0) for PUCCH formats 1a and 1b
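On the assumption that the table has been reproduced accurately, the bit-to-symbol mapping can be expressed as follows, purely by way of illustration (for format 1, d(0) is simply fixed at 1, as noted in paragraph 239); the function and dictionary names are illustrative assumptions.

```python
# Mapping of HARQ-ACK bits to the modulation symbol d(0) for PUCCH formats 1a and 1b,
# following Table 5.4.1-1 as reproduced above; for format 1 the value is fixed at 1.

D0_FORMAT_1A = {(0,): 1, (1,): -1}
D0_FORMAT_1B = {(0, 0): 1, (0, 1): complex(0, -1), (1, 0): complex(0, 1), (1, 1): -1}

def d0(pucch_format, bits=()):
    if pucch_format == "1":
        return 1                      # fixed value: the d(0) waveform is predetermined
    if pucch_format == "1a":
        return D0_FORMAT_1A[tuple(bits)]
    if pucch_format == "1b":
        return D0_FORMAT_1B[tuple(bits)]
    raise ValueError("only PUCCH formats 1, 1a and 1b are modelled here")

if __name__ == "__main__":
    print(d0("1"), d0("1a", [1]), d0("1b", [0, 1]))   # 1, -1, -1j
```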
240. In Release 8, two different structures are used for PUCCH signals, the first for PUCCH formats 1, 1a and 1b, and the second for PUCCH formats 2, 2a and 2b. The structures differ in terms of which SC-FDMA symbols within each slot are designated as DMRS symbols. For normal cyclic prefix, there are 3 DMRS symbols per slot for formats 1, 1a and 1b and 2 DMRS symbols per slot for formats 2, 2a and 2b. This is shown in Figure 18 below in which the remaining SC-FDMA symbols of the sub-frame are labelled ‘non-DMRS’. In the ‘mixed PRB’ case to which I referred in paragraph 229, this means that DMRS symbols of PUCCH formats 1, 1a or 1b coexist in the same SC-FDMA symbol as non-DMRS symbols of PUCCH formats 2, 2a or 2b (and vice versa). This is possible because both are constructed using (different cyclic shifts of) the same underlying frequency-domain codes.
241. The number of modulation symbols that are carried in a PUCCH sub-frame is a function of the PUCCH format as shown in Table 1 below.
PUCCH Format |
Modulation symbols |
Modulation Symbol Notation |
1, 1a, 1b |
1 |
d(0) |
2 |
10 |
d(0), d(1), …, d(9) |
2a, 2b |
11 |
d(0), d(1), …, d(10) |
Table 1 – Modulation symbols carried by different PUCCH formats
242. Figure 18 below shows which particular SC-FDMA symbols of the sub-frame are used to carry the modulation symbols d(0), d(1), …, d(10). Note however that for diagrammatical clarity, the notation ‘dn’ has been used in lieu of ‘d(n)’ and the time/frequency spreading that is applied when constructing the signals of a PUCCH sub-frame is not shown.
Figure 18 - PUCCH format structures
243. As can be seen in Figure 18, the modulation symbols are usually carried by non-DMRS SC-FDMA symbols of the sub-frame. However, the principle of mapping modulation symbols to only non-DMRS symbols is not always adhered to, as is evident for example in PUCCH formats 2a and 2b, which allow simultaneous transmission of both CQI and HARQ-ACK information. For these formats, the first 10 modulation symbols, d(0), ... d(9), carry the CQI whilst the 11th modulation symbol, d(10), carries the HARQ-ACK. The CQI information is phase modulated and is mapped to the set of non-DMRS symbols. However, the HARQ-ACK information is instead used to phase-modulate the second DMRS symbol of each slot (whilst the first DMRS symbol of each slot remains unmodulated). In this way, the ACK/NACK information is coherently signalled via the second DMRS symbol and the overall information content is therefore carried via a mix of SC-FDMA symbol types (i.e. those designated as 'non-DMRS' and those designated as 'DMRS').
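The paragraph 243 point can be pictured in the following simplified sketch, in which the HARQ-ACK symbol d(10) multiplies the second DMRS symbol of a slot while the first remains unmodulated. The list-based model and function name are illustrative assumptions rather than the notation of TS 36.211.

```python
# Simplified sketch for PUCCH formats 2a/2b: the CQI symbols d(0)..d(9) are mapped to
# non-DMRS symbols, while the HARQ-ACK symbol d(10) phase-modulates the second DMRS
# symbol of each slot (the first DMRS symbol stays unmodulated).

def build_slot_dmrs(base_dmrs_symbols, d10):
    """base_dmrs_symbols: the two (unmodulated) DMRS symbols of one slot for
    formats 2a/2b. Returns the DMRS symbols actually transmitted."""
    first, second = base_dmrs_symbols
    return [first, [x * d10 for x in second]]   # ACK/NACK coherently carried on DMRS #2

if __name__ == "__main__":
    base = [[1, 1, 1], [1, 1, 1]]           # toy 3-element 'symbols', for illustration only
    print(build_slot_dmrs(base, d10=-1))    # [[1, 1, 1], [-1, -1, -1]]
```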
246. The general principle of 2D spreading is illustrated by means of a generic example in Figure 19 in which a frequency domain spreading sequence (F1, F2, F3, F4) of length 4 is used in combination with a time domain spreading sequence (T1, T2, T3, T4, T5) of length 5 in order to produce a 2-dimensional code of size 20. As can be seen, the 2D code is the product of the two constituent sequences. The length of each sequence is often referred to as its spreading factor (SF).
Figure 19 - Principle of 2-dimensional spreading in frequency and time
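The product structure described in paragraph 246 can be expressed in a few lines. The Python sketch below is illustrative only: the two sequences are arbitrary placeholders chosen for the example, not sequences defined in TS 36.211.

```python
# Illustrative sketch of the generic 2D spreading principle of Figure 19: the 2D code
# of size 20 is the element-wise product of a length-4 frequency-domain sequence and
# a length-5 time-domain sequence. The sequences here are arbitrary placeholders.
import numpy as np

freq_seq = np.array([1, -1, 1, -1])        # (F1, F2, F3, F4), spreading factor 4
time_seq = np.array([1, 1, -1, 1, -1])     # (T1, T2, T3, T4, T5), spreading factor 5

# Entry (i, j) of the 2D code is F(i+1) x T(j+1).
code_2d = np.outer(freq_seq, time_seq)
print(code_2d.shape)                       # (4, 5): overall spreading factor 20
```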
247. For PUCCH format 1x, a length-12 spreading sequence is used in the frequency domain such that the signal occupies 12 sub-carriers of a PRB. The sequence is represented as r(α)u,v(n) in section 5.4.1 of TS 36.211 and is specified in more detail in sections 5.5.1 and 5.5.1.2 of the same specification. The sequence results in a signal in the time domain that has ideal autocorrelation properties. As a result, if an operation called ‘cyclic shifting’ is carried out on the sequence, the original and cyclically shifted versions are orthogonal to one another.
248. Cyclic shifting, as the name suggests, involves shifting the position of values in the sequence by a predetermined amount (the cyclic shift that is applied), whilst ensuring that those that then drop off the end are placed at the beginning. This principle is illustrated in Figure 20 below for an example in which a cyclic shift of 2 is applied to a length 12 sequence:
Figure 20 - Principle of cyclic shifting of a sequence
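The operation and its consequence for orthogonality can be illustrated with a short Python sketch. The length-12 sequence used below is simply a constant-amplitude sequence with ideal autocorrelation chosen for the example; it is not the base sequence defined in TS 36.211, and the function name is mine.

```python
# Illustrative sketch of cyclic shifting (Figure 20) and of why cyclically shifted
# versions of a sequence with ideal autocorrelation are mutually orthogonal. The
# sequence below is a stand-in, not the base sequence specified in TS 36.211.
import numpy as np

n = np.arange(12)
base = np.exp(-1j * np.pi * n**2 / 12)    # constant-amplitude example sequence

def cyclic_shift(seq, shift):
    """Shift the positions of the values, wrapping those that drop off the end."""
    return np.roll(seq, shift)

s0 = cyclic_shift(base, 0)
s2 = cyclic_shift(base, 2)
print(round(abs(np.vdot(s0, s0)), 3))     # 12.0: full correlation with itself
print(round(abs(np.vdot(s0, s2)), 3))     # ~0.0: orthogonal to the shifted version
```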
249. In this way, the sequence can be used to create a set of up to 12 mutually orthogonal sequences (each with a different cyclic shift denoted by the variable α). The same code structure (based on the same underlying length-12 sequence) is used for frequency domain spreading in both the DMRS region and the d(0) region.
250. For the time domain dimension, different spreading sequences of length 2, 3 and 4 are defined. For the DMRS region, length-3 sequences are used for normal cyclic prefix and length-2 sequences are used for extended cyclic prefix. These ‘orthogonal cover’ sequences for the DMRS region are denoted by the variable w̄ as specified in section 5.5.2.2.1 and Table 5.5.2.2.1-2 of TS 36.211 (reproduced in Figure 21 below).
Sequence index | Normal cyclic prefix | Extended cyclic prefix
0 | [1 1 1] | [1 1]
1 | [1 e^(j2π/3) e^(j4π/3)] | [1 −1]
2 | [1 e^(j4π/3) e^(j2π/3)] | N/A
Figure 21 - Reproduction of TS 36.211 Table 5.5.2.2.1-2 –
Orthogonal sequences for PUCCH formats 1, 1a and 1b
251. For the d(0) region, time domain codes of length 4 and 3 are defined. Length 4 is used in the normal case for both slots of a PRB pair within a subframe. However, the specifications also support a ‘shortened’ PUCCH format 1x in which the last time domain symbol of the second slot of the PRB pair is omitted and, for this slot, the d(0) region then uses length-3 codes. The sequences for the d(0) region are denoted by the variable w and are specified in Tables 5.4.1-2 and 5.4.1-3 of TS 36.211 as reproduced below (in Figure 22 and Figure 23). [5]
Sequence index | Orthogonal sequences
0 | [+1 +1 +1 +1]
1 | [+1 −1 +1 −1]
2 | [+1 −1 −1 +1]
Figure 22 - Reproduction of TS 36.211 Table 5.4.1-2 –
Orthogonal sequences for NSFPUCCH = 4
Sequence index | Orthogonal sequences
0 | [1 1 1]
1 | [1 e^(j2π/3) e^(j4π/3)]
2 | [1 e^(j4π/3) e^(j2π/3)]
Figure 23 - Reproduction of TS 36.211 Table 5.4.1-3 –
Orthogonal sequences for NSFPUCCH = 3
252. The overall 2D spreading structure within a slot of a non-shortened PUCCH format 1x signal (for normal cyclic prefix) is shown in Figure 24 below.
Figure 24 - 2D spreading for non-shortened PUCCH formats 1, 1a and 1b
253. The number of UEs that may share a PRB is a function of the overall degree of 2D spreading that is applied. For example, the 2D spreading for a 3-symbol DMRS region is of degree 12 × 3 = 36, while for a 4-symbol d(0) region it is equal to 12 × 4 = 48. As the degree of spreading is lower for the DMRS region, this becomes the limiting factor and therefore the maximum number of PUCCH format 1x transmissions that may be multiplexed together within the same PRB is 36.
254. The eNB may configure the UE with sr-PUCCH resources by means of RRC signalling, in which a “SchedulingRequestConfig” information element (IE) is sent to the UE.
255. When the SchedulingRequestConfig IE is used to configure (i.e. set up) resources for sr-PUCCH it conveys three fields that respectively define three parameters: n(1)PUCCH,SRI, ISR and dsr-Transmax. These, in turn, define various attributes relating to the sr-PUCCH resource that is allocated and the PUCCH format 1x signals that will be sent on this allocated resource.
256. The first of these parameters, ‘n(1)PUCCH,SRI’, is a resource index (as mentioned above), which is an integer value between 0 and 2047. The eNB may assign different resource indexes to different UEs. Section 5.4.1 of TS 36.211 states that: “Resources used for transmission of PUCCH format 1, 1a and 1b are identified by a resource index from which the orthogonal sequence index and the cyclic shift are determined”. Mr Anderson commented that the underlying mathematics used to derive these terms from resource index n(1)PUCCH,SRI is reasonably complex, but he considered it was sufficient to understand that the resource index translates (as specified in TS 36.211) into (i) which PRBs the corresponding PUCCH format transmission occupies; (ii) which cyclic shifts are applied; and (iii) (for format 1/1a/1b signals) which time-domain orthogonal sequences are to be used. In other words, (ii) and (iii) are the time/frequency domain spreading sequences employed when constructing the PUCCH format 1x signals that are sent on the resource. These points are illustrated in Figure 25 below.
Figure 25 - Mapping of sr-PUCCH resource index
257. The same underlying principle applies for logical resource indices (n(2)PUCCH) into the PUCCH type 2 resource space, although because time-domain orthogonal sequences are not used for PUCCH format 2, 2a and 2b transmissions, the logical resource index defines only the cyclic shifts and PRBs that are to be used.
258. The second parameter, ISR, is an integer value between 0 and 155 that is used to identify a periodic pattern of subframe instances on which the sr-PUCCH resource may be used by the UE. Therefore, the value of ISR defines both the periodicity (5, 10, 20, 40 or 80ms) and the offset within the period when sr-PUCCH resources have been allocated to the UE. Figure 26 below illustrates an example of the resulting time domain patterns for the first few values of ISR:
Figure 26 – Illustration of periodic scheduling request transmission instances
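The way in which a configured periodicity and offset generate such a pattern can be sketched briefly. The mapping from ISR to the periodicity/offset pair is tabulated in TS 36.213 and is not reproduced here, so the sketch below simply takes a periodicity and an offset as given; the function name and the example values are mine.

```python
# Illustrative sketch of how a configured SR periodicity and offset produce the
# periodic pattern of sr-PUCCH instances shown in Figure 26. The ISR-to-(periodicity,
# offset) mapping itself is defined in TS 36.213 and is not reproduced here.

def sr_instances(periodicity_ms: int, offset: int, n_subframes: int = 40):
    """Return the subframe numbers (1 ms each) on which the UE may signal SR."""
    return [sf for sf in range(n_subframes) if (sf - offset) % periodicity_ms == 0]

print(sr_instances(5, 0))    # a 5 ms periodicity with zero offset
print(sr_instances(10, 3))   # a 10 ms periodicity offset by 3 subframes
```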
259. The third parameter within the SchedulingRequestConfig IE is dsr-Transmax, which sets the maximum number of SR transmissions. As explained above, section 5.4.4 of the MAC specification (TS 36.321) specifies that once an SR has been triggered, the UE will continue to transmit a PUCCH format 1x signal on the configured instances of the periodic resource until the SR is cancelled. This cancellation occurs either when UL-SCH resources are provided or when the maximum number of transmissions, set by the dsr-Transmax parameter, is reached. This can take the value of 4, 8, 16, 32 or 64.
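As a rough illustration of the behaviour summarised in the preceding paragraph, the following Python sketch runs the SR procedure until the SR is cancelled either by a grant or by reaching dsr-Transmax. The function names and the grant condition are mine and are not drawn from TS 36.321.

```python
# Illustrative sketch of the SR procedure summarised above (TS 36.321 section 5.4.4):
# once triggered, a PUCCH format 1x signal is sent on each configured sr-PUCCH
# instance until UL-SCH resources are granted or dsr-Transmax transmissions are made.

def send_pucch_format_1x(subframe):
    print(f"PUCCH format 1x sent on the sr-PUCCH resource in subframe {subframe}")

def run_sr_procedure(sr_instances, ul_grant_received, dsr_trans_max=4):
    transmissions = 0
    for subframe in sr_instances:
        if ul_grant_received(subframe):
            return "SR cancelled: UL-SCH resources granted"
        send_pucch_format_1x(subframe)      # the presence of the signal conveys the SR
        transmissions += 1
        if transmissions >= dsr_trans_max:
            return "SR cancelled: dsr-Transmax reached"
    return "SR still pending"

print(run_sr_procedure(range(0, 40, 10), lambda sf: sf >= 25, dsr_trans_max=8))
```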
260. Once a UE has determined it requires uplink resources and signals SR to the eNB, the eNB allocates resources in the following way.
261. Schedulers located in the MAC of the eNB are responsible for allocating radio resources to UEs. Scheduling of uplink radio resources (for the UL-SCH) is performed through the use of dynamic uplink grants which are sent to the UE on a physical downlink control channel (PDCCH). Typically, a grant is provided for one Transmission Time Interval (TTI) lasting 1ms. An uplink grant message allows the eNB to specify information such as which physical resource blocks (PRBs) within the system bandwidth have been assigned, the modulation and coding scheme that should be used, a power control command to apply and so forth. Information regarding to which UE the assignment is addressed is also carried in the same message (a Downlink Control Information (DCI) format 0).
262. Mr Anderson explained that the operation of the scheduler will be proprietary to the eNB manufacturer, but the decision as to which UEs to allocate resources to will inevitably be based on at least information received by the eNB that an SR has been triggered at the UE, as well as other information.
263. As explained above, the triggering of an SR results in either the transmission of a PUCCH format 1x signal by the UE using its allocated sr-PUCCH resource (if sr-PUCCH resource has been configured and remains valid), or the initiation of a random access procedure (otherwise). For the former case, in response to receipt of a PUCCH format 1x signal on a UE’s sr-PUCCH resource, the eNB may then allocate UL-SCH resources to that UE via a DCI format 0 transmission on PDCCH.
265. The eNB decodes the BSR and is thereby informed of more detailed information regarding the quantity and priority of the data that is pending in the UE’s transmission buffers. The eNB uses this information as an input to its uplink scheduling process and may then provide the UE with further grants of uplink resources on which the UE may send the pending data. Whilst this process is ongoing, the BSR reporting procedures defined in TS 36.321 may generate further BSR MAC control elements to update the eNB regarding changes in the UE’s buffer status (e.g. as further data arrives and is queued in the UE’s buffer(s) for transmission). These BSR MAC control elements are sent on UL-SCH resources that are granted by the eNB primarily for the purposes of the user data transmission. However, such resources will cease to be allocated once the eNB understands the UE’s needs to have been satisfied (that is, the UE’s buffers are empty). After this point, the arrival of any further uplink data for transmission by the UE will result in the triggering of a regular BSR, although because there are no UL-SCH resources available on which to send the corresponding MAC control element, an SR is then triggered at the UE to re-initiate the allocation of resources.
266. The uplink scheduler in the eNB may use a number of different inputs to aid in its decision making as to which UEs to schedule, the resources to allocate to them and the format that the transmissions should take (such as which modulation and coding scheme to use).
267. In addition to information regarding the buffer status of the UE, these inputs may comprise information regarding the radio (or channel) conditions that the UE is currently experiencing, such as what the propagation channel or impulse response ‘looks like’, how much interference is present, how much power headroom is available at the UE and so forth. The way in which this information is used by the eNB is dependent upon the proprietary scheme that is implemented; however, the possibilities include adaptive modulation and coding and channel dependent scheduling, as I have previously described.
268. Channel-related information that may be used by the scheduler may be obtained from a number of different sources. Some of this may originate from the eNB’s analysis of uplink reference signals transmitted by the UE, whereas some may not.
269. It was not in dispute but in any event I find that a mobile signals its need for uplink resources using the SR procedure. That involves the UE signalling SR (or not) on the sr-PUCCH resource to indicate its need (or not) for uplink resources. However, the disputes about essentiality are really about whether signalling SR on the sr-PUCCH resource involves “transferring a pilot symbol” (in claim 1 of 259) and/or whether the information transferred that indicates the need (or not) is “based on pilot symbols provided by the base station” (in claim 1 of 689). In order to address those and the other arguments made by the Defendants, I now need to describe how the individual symbols in a PUCCH format 1x signal are constructed.
270. I have referred to the logical resource indices above. It is important to note that the logical resource index identifies not only the PRBs and codes for the non-DMRS symbols of a PUCCH sub-frame, but also those for the DMRS symbols of the sub-frame. Deriving which PRBs to use is straightforward as the DMRS symbols are always transmitted in the same PRBs as the non-DMRS symbols. For PUCCH format 1, 1a and 1b signals and for normal cyclic prefix, the cyclic shifts and the time domain spreading index are also derived in the same way for both the non-DMRS symbols and the DMRS symbols (although for the DMRS symbols the standard denotes this time domain spreading index with a barred notation). Mr Anderson explained that this can be seen by comparing the formulae in TS 36.211 for the related parameters in section 5.4.1 (for non-DMRS symbols) and section 5.5.2.2.1 (for DMRS symbols), and he was not challenged on that. If the number of SC-FDMA symbols for the non-DMRS and DMRS regions of a PUCCH sub-frame differ, the time domain spreading index (with value 0, 1 or 2) is used to identify two sequences (w for the non-DMRS region and w̄ for the DMRS region), each having the correct length for the corresponding region. However, the index itself remains common to both.
271. In section 5.4.1 of TS 36.211, an index n’(ns) is associated with the (time and frequency domain) codes that are to be used in a PRB and this can be thought of as a ‘channel number’ (that applies to both the DMRS and non-DMRS regions) within the PRB. As mentioned in paragraph 253 above, a maximum of 36 PUCCH format 1, 1a or 1b signals may be multiplexed within the same PRB, and in such a case, n’(ns) would be an integer between 0 and 35. A logical PUCCH resource index (such as n(1)PUCCH,SRI) is associated with a pair of n’(ns) values, one for the even slot number (ns) of the sub-frame, and one for the odd slot number. This is known as resource ‘re-mapping’ and is used to randomize interference.
272. Mr Anderson provided an illustration, in Figure 27 below, of the structure and composition of a PUCCH format 1 signal (in one slot of its sub-frame) for both the DMRS region and the non-DMRS region.
Figure 27 – Construction of PUCCH Format 1 (showing one slot of a sub-frame)
273. As I described in paragraph 249 above, a common length-12 base sequence is used to create each SC-FDMA symbol (irrespective of whether the symbol lies in the DMRS or non-DMRS region). The Figure shows that (for each SC-FDMA symbol) the base sequence is then modified in the following ways:
i. First, it is multiplied by a modulation symbol value. This is d(0) for the non-DMRS region, and is z(m) for the DMRS region. d(0) is equal to 1 for PUCCH format 1. z(m) is also equal to 1. I note in passing that z(m) is also equal to 1 for format 1a/1b signals, but that d(0) can have values other than 1.
ii. Secondly, a cyclic shift is applied. (In the implementation shown, the signal is still in the frequency domain at this point. Hence the time-domain cyclic shift that is desired is actually imparted by means of applying an equivalent ‘phase ramp’ in the frequency domain). A ‘base’ cyclic shift index (denoted ‘cs’ in the figure) is associated with the PUCCH signal within the slot (based on its logical resource index), but a permutation pattern (varying from symbol to symbol) is then applied to create a series of actual cyclic shifts, one for each SC-FDMA symbol, as shown in the figure. The permutation pattern continues across the sub-frame irrespective of whether the SC-FDMA symbol is in the non-DMRS or the DMRS region. This does mean that the cyclic shift used for a particular DMRS symbol may differ from that of a particular non-DMRS symbol. However, the same is true in general for all SC-FDMA symbols of the sub-frame, irrespective of whether they are designated as DMRS or non-DMRS (that is, there is only one permutation pattern that runs across the whole sub-frame).
iii. Thirdly, a value from a time-domain orthogonal sequence is applied. For the non-DMRS region, the values are shown as (w0, w1, w2, w3) whereas for the DMRS region, the values are shown as (w̄0, w̄1, w̄2). These two sequences share a common time-domain sequence index (which is in turn determined by the logical PUCCH resource index). The actual sequences differ only due to the lengths of the respective regions (typically 3 for the DMRS and 4 for the d(0) region). As Mr Anderson explained, when the lengths are the same (which they are in a particular instance), the sequences are the same.
iv. Fourthly, a scrambling value (S) is applied (for the non-DMRS region only). This is equal to either 1 or the complex value j depending on the channel number n’(ns) that is in use within the slot (as explained above). The scrambling value S is 1 for even channel numbers and is j for odd channel numbers. For the DMRS region, no scrambling is performed, which is equivalent to multiplication by 1. Although Mr Bishop drew attention to this difference, he did not explain why it was relevant and Mr Anderson did not see it as altering the fundamental way in which the SR is communicated. I agree.
274. Once the above steps have been performed, an IFFT operation is used to create the time-domain signal for each SC-FDMA symbol. A cyclic prefix (CP) is then created and prepended to each.
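To draw the four steps and the final IFFT/cyclic prefix stage together, the following Python sketch follows the processing chain described in paragraphs 273 and 274 for a single SC-FDMA symbol. The base sequence, the cyclic shift values, the IFFT size and the cyclic prefix length are illustrative placeholders of my own, not the values specified in TS 36.211.

```python
# Illustrative sketch of the per-symbol processing described in paragraphs 273-274
# for a PUCCH format 1 signal: base sequence x modulation symbol, cyclic shift
# (applied as a frequency-domain phase ramp), time-domain orthogonal cover value,
# scrambling, then IFFT and cyclic prefix. All numerical values are placeholders.
import numpy as np

N_SC = 12
n = np.arange(N_SC)
base_seq = np.exp(-1j * np.pi * n**2 / N_SC)       # stand-in length-12 base sequence

def build_sc_fdma_symbol(d, alpha, w_value, scramble, n_ifft=128, cp_len=9):
    sym = d * base_seq                             # step (i): modulation symbol
    sym = sym * np.exp(1j * alpha * n)             # step (ii): cyclic shift as a phase ramp
    sym = sym * w_value                            # step (iii): orthogonal cover value
    sym = sym * scramble                           # step (iv): scrambling (1 or j)
    grid = np.zeros(n_ifft, dtype=complex)
    grid[:N_SC] = sym                              # simplified sub-carrier mapping
    time_sig = np.fft.ifft(grid)                   # to the time domain
    return np.concatenate([time_sig[-cp_len:], time_sig])   # prepend the cyclic prefix

# A non-DMRS (d(0)) symbol of a format 1 signal (d(0) = 1) and a DMRS symbol (z(m) = 1),
# built from the same base sequence and code structure.
non_dmrs_symbol = build_sc_fdma_symbol(d=1, alpha=2 * np.pi * 3 / N_SC, w_value=1, scramble=1)
dmrs_symbol = build_sc_fdma_symbol(d=1, alpha=2 * np.pi * 5 / N_SC, w_value=1, scramble=1)
print(len(non_dmrs_symbol), len(dmrs_symbol))      # each is the IFFT length plus cyclic prefix
```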
275. Most of the different UCI types that are carried by PUCCH transmissions (including HARQ-ACK) are conveyed using ‘coherent’ modulation. To do so, modulation symbols (d(0), d(1), …, d(10)) are formed using BPSK or QPSK phase modulation, and these adjust the phase of the signals that are transmitted on particular SC-FDMA symbols of the sub-frame (as shown in Figure 18 above). The phase of each modulation symbol is coherently detected at the receiver using the DMRS as a phase reference.
277. Mr Anderson explained (and Mr Bishop accepted in cross-examination) that the way in which SR is communicated differs from the other UCI types in that the information is not carried by modulating the phase of symbols. Instead, the information is carried by the presence (vs. absence) of the signal (regardless of whether the signal is a PUCCH format 1, 1a or 1b signal) on the sr-PUCCH resource. As Mr Anderson explained, this is sometimes referred to as non-coherent modulation or ‘on-off keying’ of the signal.
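A short sketch may help to illustrate this: the receiver does not need to demodulate any phase information to detect SR; it is enough to correlate the received samples against the sequence it knows the UE would send on the sr-PUCCH resource and to test the resulting energy against a threshold. The sequence, the noise level and the threshold below are all illustrative values of my own.

```python
# Illustrative sketch of non-coherent ('on-off keying') detection of SR: the receiver
# correlates the received samples with the sequence known to be associated with the
# UE's sr-PUCCH resource and compares the correlation energy with a threshold.
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(12)
ue_sequence = np.exp(-1j * np.pi * n**2 / 12)      # known to both UE and eNB (stand-in)

def sr_detected(received, threshold=6.0):
    """True if enough correlated energy is present on the sr-PUCCH resource."""
    return abs(np.vdot(ue_sequence, received)) > threshold

noise = 0.1 * (rng.standard_normal(12) + 1j * rng.standard_normal(12))
print(sr_detected(ue_sequence + noise))            # signal present -> positive SR
print(sr_detected(noise))                          # nothing sent -> no SR
```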
278. Although the symbols in the d(0) region of PUCCH format 1x signals are defined in separate sections of the standard to the DMRS, as Mr Anderson’s figure illustrates (see Figure 27 above), much of the processing is common.
279. Although much of what I have already set out is derived from the TS36.211 standard, I now need to set out the parts on which the Defendants place particular reliance.
280. Section 5 is concerned with the uplink and starts in section 5.1 with an overview which states: ‘The smallest resource unit for uplink transmissions is denoted a resource element and is defined in section 5.2.2.’
281. Section 5.1.1, headed Physical Channels, specifies that three uplink physical channels are defined: PUSCH, PUCCH and PRACH. It states: ‘An uplink physical channel corresponds to a set of resource elements carrying information originating from higher layers…’
282. Section 5.1.2, headed Physical Signals, states: ‘An uplink physical signal is used by the physical layer but does not carry information originating from higher layers. The following uplink physical signals are defined: - Reference signal.’ (emphasis added).
283. The PUCCH is specified in section 5.4, which states: ’The physical uplink control channel, PUCCH, carries uplink control information.’ and ‘The physical uplink control channel supports multiple formats as shown in Table 5.4-1’ and that table lists the six PUCCH formats. PUCCH formats 1/1a/1b are dealt with in section 5.4.1, where the details of the processing are set out. That section also begins with the statement ‘For PUCCH format 1, information is carried by the presence/absence of transmission of PUCCH from the UE. In the remainder of this section, d(0) = 1 shall be assumed for PUCCH format 1.’
284. The Defendants submit that section 5.4.1 explains what a format 1/1a/1b signal is, how it is constructed and it does not include the DMRS, all of which is true. Section 5.4.3 specifies that PUCCH format 1/1a/1b is mapped to resource elements ‘not used for transmission of reference signals’.
285. The reference signals (DMRS and SRS) are dealt with in section 5.5. It specifies that the DMRS is ‘associated with the transmission of PUSCH or PUCCH’. The generation of the DMRS for PUCCH is specified in section 5.5.2.2.1 and the mapping to physical resources is in section 5.5.2.2.2. The Defendants’ point here is that if DMRS was part of PUCCH, these sections would be in 5.4 but they are not. Section 5.5.2.2.2 says that DMRS is mapped to the same frequencies ‘as for the corresponding PUCCH transmission’.
286. I have already summarised the way in which the DMRS and d(0) region symbols are created and I do not need to explain the additional detail set out in the standard because what the Defendants are getting at is (a) they are defined in the standard separately and (b) they require different processing for their generation (and the latter point is no doubt the reason why they are dealt with in separate (sub-)sections). The Defendants’ point is illustrated by comparing two diagrams reproduced by Mr Bishop from a textbook referred to as Johnson (Long Term Evolution in Bullets (2nd Edition, 2012) by Chris Johnson). Each diagram illustrates the processing specified in the standard. First, the d(0) region:
288. There is no dispute between the experts on this. It can be seen that Mr Anderson’s figure (Figure 27 above) represents the combination of those two diagrams, at least so far as a PUCCH format 1 signal is concerned.
289. I can now turn to address the various points made by the Defendants, by drawing together various threads from the technical detail. I have already dealt with the sixth and seventh points. The tenth point falls away in the light of my conclusions below. I address the other points in turn, although it will be noted that much of the ninth point (concerning coherent modulation) is resolved as part of my consideration of the second point.
290. Logically, the first issue is that the Defendants say that none of the pilot signals relied on by the Claimants is the right sort of pilot. It is common ground that none of the Claimants’ candidates is a channel quality pilot. In LTE, the channel quality pilot is the SRS. The SRS is not used by the UE to signal an SR and the Claimants do not allege that the SRS falls within the claims. This point does not avail the Defendants, because I have decided the construction issue against them.
291. The second and perhaps the main issue is that the Defendants say the DMRS is not ‘part of’ the PUCCH format 1/1a/1b signal. The Defendants make a number of points in support of their position:
i) First, they say that the PUCCH is a physical channel and the DMRS is a separate physical signal which accompanies it. However, this argument is nothing more than misdirection which confuses the PUCCH as a channel with the signal(s) in question, which are PUCCH format 1/1a/1b signals sent on that channel.
ii) Second, the Defendants submit that it is clear from TS36.211 that, as they put it, PUCCH and DMRS are separate (but associated) signals. This submission relies on the passages from the standard which I have set out above.
292. The Defendants further submitted as follows:
i) Mr Anderson accepted that the PUCCH formats are defined by the standard as separate signals;
ii) That the standard provides that signalling of an SR is done by (and only by) format 1/1a/1b, by which it means the four PUCCH data symbols which constitute a format 1/1a/1b (“the d(0) region”)
iii) That the point is especially clear in relation to PUCCH format 1 because TS36.211 section 5.4.1 says that “For PUCCH format 1, information is carried by the presence/absence of transmission of PUCCH from the UE”. They say that is within the section defining PUCCH (section 5.4) and there is no doubt that “PUCCH” in that context means PUCCH as described in that section and not the reference signals such as DMRS.
iv) Accordingly, the Defendants invite the conclusion that the signalling of SR is done by the four d(0) symbols and not by the DMRS.
293. Leaving aside the point, with which I have already dealt, that the Defendants confuse the PUCCH as a channel with the actual PUCCH format 1x signals, there is no doubt that somewhat different processing is required to generate the d(0) symbols as opposed to the DMRS symbols, and the standard draws the distinctions which the Defendants seek to emphasise. However, it seems to me that the Defendants’ analysis is not only directed at the wrong level of generality but also places far too much emphasis on general statements in the standard. Although these general statements are apposite in context, they cannot be taken too literally.
294. In order to identify the correct level of generality, the starting point lies in a different standards document, TS 36.321, which specifies in section 5.4.4 that ‘the Scheduling Request (SR) is used for requesting UL-SCH resources for new transmission’. As explained above, section 10.1 of TS 36.213 specifies that
"The following combinations of uplink control information on PUCCH are supported:
- Scheduling request (SR) using PUCCH format 1
- HARQ-ACK and SR using PUCCH format 1a or 1b"
295. It is clear, as I stated at paragraph 227 above, that it is the presence or absence of a PUCCH format 1x signal on the assigned sr-PUCCH resource which indicates a positive or negative SR.
296. Accordingly, the Defendants’ splitting of the PUCCH format 1x signals into two separate signals is incorrect, for the following reasons:
i) First, the simple answer is the standard specifies that a particular form of signal is sent to request uplink resources. It is either a PUCCH format 1 signal for SR alone or a PUCCH format 1a/1b for (SR + HARQ-ACK). Those are the signals which are required to be sent in order to signal SR. In either case, it is abundantly clear that the signal comprises both the d(0) regions and the DMRS. It is therefore, in my view, artificial and wrong to isolate one part of the signal (just the d(0) region) from its other constituent parts, as the Defendants seek to do.
ii) I also observe that there is a distinct air of unreality about the Defendants’ argument when they insist that SR is indicated only by the d(0) region and not by the DMRS, when both are and have to be transmitted in the same slot of a sub-frame, and when the standard requires a signal to be sent which must comprise both.
iii) To put the same point a different way, the Defendants’ argument is at the wrong level of generality. The correct level of generality is indicated by the standard when it requires SR to be signalled by a PUCCH format 1x signal.
iv) Second, as Mr Bishop ultimately accepted in cross-examination, the information indicating the need for resources is conveyed by all 7 symbols of the PUCCH format 1/1a/1b signals, and not just the outer four symbols of the d(0) region.
297. The final point I have to address concerns the differences in the processing used to create the symbols in the d(0) region and the DMRS region. On this point, the Defendants sought to emphasise a distinction between PUCCH format 1 signals and format 1a/1b signals. So I will approach them separately.
298. So far as the PUCCH format 1 signal is concerned, the Defendants submitted that the Claimants liked to focus on that and I quote: ‘because the symbols of this message might look a little bit like a pilot signal to the untrained eye, in that they are modulated with d(0)=1.’
299. So far as the PUCCH format 1a/1b signals are concerned, the Defendants submitted that ‘the symbols of a PUCCH format 1a and 1b are modulated by the UE, unknown to the receiver, and cannot be considered a pilot on any view - unless all signals are pilot signals.’ They also submitted that Mr Anderson’s figure which I have reproduced above as Figure 27 ‘is liable to mislead if one is not very careful, because it conceals the variable modulation of the PUCCH symbols.’
a) ‘The data symbols are still modulated, with d(0)=1.
b) Further, the data symbols of a PUCCH format 1 message cannot be pilot symbols on any view, because they are not and cannot be used as a phase reference (i.e. as a demodulation pilot) or for any other pilot purpose (save to the extent that the received power of any signal can be measured).’
301. Turning first to the PUCCH format 1 signal, the following points are clear:
i) First, the creation of all the symbols starts from the same known base 12 sequence.
ii) Second, the cyclic shifts, orthogonal sequences and scrambling values (where applied) are all known and are specified by the base station to the mobile in the sr-PUCCH resource index by means of RRC signalling in which the SchedulingRequestConfig IE is sent to the mobile by the base station.
iii) Third, accordingly, the PUCCH format 1 signals (and each symbol therein) are pre-defined signals known to both transmitter and receiver.
iv) Fourth, both the DMRS and d(0) regions have ideal autocorrelation properties, making them suitable for estimating signal power and a carrier to interference ratio (C/I).
v) Accordingly, the whole of a PUCCH format 1 signal (and each symbol therein) is capable of being used to perform typical pilot functions.
vi) In any event, as the standard specifies, as Mr Anderson explained and as I found above, SR is signalled (or not) by the presence (or absence) of a PUCCH format 1 signal on the sr-PUCCH resource.
302. Accordingly, I find that signalling SR by a PUCCH format 1 signal on the sr-PUCCH resource satisfies integer C of EP259. Equally, the presence or absence of a PUCCH format 1 signal on that resource indicates a need or no need for uplink resources, and so integer B of EP689 is also satisfied.
303. So far as the PUCCH format 1a/1b signals are concerned, they are constructed in the same way as the format 1 signal. Specifically, the same processing is used to create the DMRS symbols (see Figure 157 from Johnson, reproduced at paragraph 287 above). There is however one material difference between a PUCCH format 1 signal and a PUCCH format 1a/1b signal. As explained in paragraph 239 above, for PUCCH formats 1a and 1b, the d(0) field is used to carry 1 or 2 bits (respectively) of HARQ Acknowledgement information so d(0) can have the values set out in paragraph 239 above. As Mr Anderson explained in his first report, this modulation of the d(0) symbols by HARQ ACK/NACK information means that those symbols are not suitable for determining a phase reference for demodulation.
304. This may be thought to create a potential problem for the Claimants. Mr Anderson addressed this in these two paragraphs in his reply report (which I include with his footnotes):
‘57. The transmission of a PUCCH format 1, 1a or 1b sub-frame always includes the DMRS region and the non-DMRS region [6], therefore either both are transmitted, or both are not. As such, the same non-coherent modulation is applied to both the DMRS and the non-DMRS regions (both ‘on’ or both ‘off’). The use of non-coherent modulation for the SR information is also why a PUCCH format 1a or 1b signal on the sr-PUCCH resource indicates a positive SR, in effectively the same way as a PUCCH format 1 signal on the same; the d(0) region conveys the ACK/NACK information via coherent modulation (using the DMRS as a phase reference), but as all SC-FDMA symbols of the sub-frame carry energy [7], they all convey the SR information.
58. Before detecting SR (and also any ACK/NACK information if expected), the receiver demultiplexes the signals from multiple UEs sharing the same PRB using knowledge of the 2D spreading sequences that are associated with each [8] (see also paragraphs 290 - 291 of my First Report). Both the DMRS region and the non-DMRS region can be demultiplexed in this way, as both use a 2D spreading structure.’ (The content of paragraphs 290-291 of Mr Anderson’s First Report is at paragraphs 245-247 above).
305. In these paragraphs, Mr Anderson accepted that the d(0) region conveys the ACK/NACK information via coherent modulation, and is coherently demodulated using the DMRS as a reference. He suggests that, as all SC-FDMA symbols of the sub-frame carry energy, they all convey SR. Hence, ‘The use of non-coherent modulation for the SR information is also why a PUCCH format 1a or 1b signal on the sr-PUCCH resource indicates a positive SR, in effectively the same way as a PUCCH format 1 signal on the same [resource];’ (emphasis added).
306. In this regard, his footnote (numbered 8 above) is important. The phase of the d(0) symbols is affected by modulation and transmission. As explained at paragraphs 275-276 above (addressing all the possible modulation symbols d(0) to d(10)) the phase of each such modulation symbol (where present) is coherently detected at the receiver using the DMRS as a phase reference. However, it is the relative phase of the received modulation symbol which is important because it is used to determine the information which is carried. This matters for those UCI types which are conveyed using coherent modulation (e.g. HARQ-ACK, CQI). However, SR is not conveyed by modulating the phase of modulation symbols but, as I have already mentioned, by the presence (or absence) of the signal on the sr-PUCCH resource.
307. This potential difficulty explains why the Claimants and Mr Anderson (in his reply report) were keen to draw attention to the structure of the PUCCH format 2a/2b signals, even though they do not involve the signalling of SR. They focussed on the d(10) symbol in the PUCCH format 2a/2b signals as being an example where, as he put it by reference to Figure 18 I reproduced at paragraph 242 above:
‘48. ……..the modulation symbols are usually carried by non-DMRS SC-FDMA symbols of the sub-frame. However, the principle of mapping modulation symbols to only non-DMRS symbols is not always adhered to, as is evident for example in PUCCH formats 2a and 2b, which allow simultaneous transmission of both CQI and HARQ-ACK information. For these formats, the first 10 modulation symbols, d(0), … d(9), carry the CQI whilst the 11th modulation symbol, d(10), carries the HARQ-ACK. The CQI information is phase modulated and is mapped to the set of non-DMRS symbols. However, the HARQ-ACK information is instead used to phase-modulate the second DMRS symbol of each slot (whilst the first DMRS symbol of each slot remains unmodulated). In this way, the ACK/NACK information is coherently signalled via the second DMRS symbol and the overall information content is therefore carried via a mix of SC-FDMA symbol types (i.e. those designated as ‘non-DMRS’ and those designated as ‘DMRS’).’
308. Mr Bishop was asked about the status of the d(10) symbol in the PUCCH format 2a/2b signals (by reference to Figure 18 above) in cross-examination (T8, pp1026-1027):
Q. I do not think you have answered my question. My understanding is your evidence is that when this signal, the format 2a or 2b signal, arrives at the receiver, the d(10) symbol, when it arrives it is not a reference symbol, but it becomes one. Is that your evidence?
A. Well, it arrives in the reference symbol slot, but it cannot be used as a reference symbol. Once modulation has been removed, it can be used as a reference symbol. If it is jointly decoded, then I think we talk about estimating or using an estimate for data symbols or the data as well as the reference symbol in order to jointly decode. So, effectively, it is not a reference symbol, or cannot be used as a reference symbol, all the time that it is modulated.
309. This answer is revealing. It is clearly implicit that Mr Bishop accepted that the d(10) symbol was created as a reference signal. His point was that it was then coherently modulated to carry HARQ-ACK information (in each slot). I interpolate that the PUCCH format 2a or 2b signal would then be multiplexed and transmitted. On arrival at the base station, the signal would be de-multiplexed and HARQ-ACK would be detected through the coherent demodulation of the d(10) (DMRS) symbols, whilst the CQI information would be extracted from the other ten modulation symbols (d(0) to d(9)). Mr Bishop’s other point concerned when the symbol can (or cannot) be used as a reference signal, but that is beside the point. Neither of the patents requires the receiver to use the pilot symbols as reference signals.
310. Reverting to the PUCCH format 1x signals, Mr Bishop plainly accepted (more than once) that SR is not coherently modulated but is signalled by a non-coherent on-off keying scheme (T8, p1082, lines 13-17, by way of example), in other words by the mere presence (or absence) of the signal. He also accepted, in relation to a PUCCH format 1a or 1b signal on the SR resource, that SR is signalled by the presence or absence of that signal and coherent demodulation is only needed to detect the HARQ-ACK information (T8, p1064 lines 21-25). This answer was consistent with Mr Anderson’s evidence in his reply report at paragraph 58 (quoted above). Drawing all these threads together, as regards the PUCCH format 1a and 1b signals, I make the following findings:
i) As explained above, both the DMRS and the d(0) symbols are constructed as reference signals;
ii) The d(0) symbols are then coherently modulated to carry HARQ-ACK information;
iii) The entire PUCCH format 1a or 1b signal is multiplexed with other signals and transmitted (here I refer to multiplexing and demultiplexing being general references to all the processing required to send the signal and then extract it at the base station);
iv) On receipt at the base station, the receiver demultiplexes the signals from multiple UEs sharing the same PRB using knowledge of the spreading sequences associated with each UE.
v) SR is detected (or not) from the presence (or absence) of the PUCCH format 1a/1b signal itself. As Mr Anderson put it, all SC-FDMA symbols of the sub-frame carry energy, and the power or energy of a d(0) symbol is not affected by its phase, so all 7 symbols convey the SR information.
vi) The ACK/NACK information is extracted from the d(0) symbols by coherent demodulation using the DMRS as a phase reference.
311. The evidence did not make explicitly clear the order in which SR and ACK/NACK were detected. I have set them out above in the order which seems to me to be the logical one, but since each is detected in the way described above, it would not seem to matter. The point here is that additional processing of a reference signal (whether it be multiplexing or the addition of information by coherent modulation) does not affect the fact that the d(0) symbols were created with the characteristics of reference signals. Equally, the fact that such symbols cannot be used as reference signals until coherently demodulated is beside the point.
312. By way of a cross-check, I note two points about Mr Anderson’s evidence.
313. First, as the Defendants pointed out, when he was considering PUCCH format 1x signals, in his First Report at paragraph 326 he stated very clearly that he considered the DMRS region to be a pilot signal. When he came to the d(0) region in his paragraph 327, he was more tentative but this was clearly because he was conscious of the d(0) region being modulated by HARQ-ACK information in the format 1a and 1b signals. In the circumstances, this was entirely understandable precisely because, in my view, Mr Anderson was recognising that ultimately this was a question for the Court to decide. He was being careful to provide a full explanation, which he did at this point and he expanded on it in his reply report.
314. Second, I have asked myself whether Mr Anderson’s reliance on detection of the energy in the SC-FDMA symbols in the PUCCH format 1x symbols is in some way concealing that SR is not being signalled by a pilot or reference signal(s). In this regard, it might be said that conveying SR by the transmission of energy is not the same as signalling SR by a pilot or reference signal. However, such a contention confuses what is constructed and sent by the UE with the way in which SR is detected at the receiver. So I am satisfied that this question is answered in the negative.
316. Accordingly, I find that signalling SR by a PUCCH format 1a or 1b signal on the sr-PUCCH resource satisfies integer C of EP259. Equally, the presence or absence of a PUCCH format 1a or 1b signal on that resource indicates a need or no need for uplink resources, and so integer B of EP689 is also satisfied.
317. Finally, I observe that this part of the case was not particularly well-explored at trial, perhaps because the Defendants recognised the force of the case based on the PUCCH format 1 signal alone, but were aiming at trying to infect all PUCCH format 1x signals with their main argument that the DMRS and d(0) regions were separate signals. However, in the result, I have reached the clear conclusions stated above.
318. My conclusions do raise the question of whether the pilot symbols in claim 1 of 259 must do the job of requesting uplink resources alone and unaccompanied by any other data or signals (e.g. the HARQ-ACK information in PUCCH format 1a/1b signals when signalling SR). However, I see no reason why an accompanied pilot symbol does not fall within claim 1 of 259 if it does the job of indicating a need for uplink resources. In this regard, Mr Bishop gave an interesting answer in cross-examination (Day 8, p1025) when being asked about the d(10) symbol being modulated to carry HARQ-ACK information in a PUCCH format 2a/2b signal:
‘Q. But that is a symbol that is clearly being used, first, can be used and probably maybe is used in many implementations, both as a reference symbol and to carry information?
A. As I said earlier, 3GPP liked to re-use signals if they can to avoid having to transmit additional signals, and what they have done here is that they have re-used the base sequence that was being used for reference purposes in order to carry a modulated signal.’
319. Mr Bishop’s earlier reference was when he was being asked about the patent(s) (Day 8, p997) where he said:
‘A. I think the patent is proposing to re-use an existing signal, but it happens to be a pilot signal. 3GPP and standards organisations all over the world working on radio systems are very interested in efficiency and minimising the amount of signalling that takes place. So if it is possible to re-use an existing signal for an additional purpose, then they will generally try to pursue that if it is technically the best solution or technically an acceptable solution.’
320. It does not matter whether the tendency to re-use signals was CGK or not. Mr Bishop’s answers were a clear confirmation of the fact that where a symbol is created with the characteristics of a pilot signal, it does not lose those characteristics if it is then modulated to convey additional information, even if, at the receiver, that symbol must be coherently demodulated to enable SR to be detected.
321. This is a thoroughly bad point made by the Defendants. It is true that DMRS are sent in other signals, but the point is that the Claimants’ case does not comprise every DMRS but only those sent in PUCCH format 1x signals. The fact that DMRS are also sent in other signals does not affect that case at all. Furthermore, the DMRS sent in other signals are created according to different resource indices.
322. Mr Anderson addressed this point by saying that the DMRS may be sent at different powers in each sub-frame (in accordance with the PUCCH power control scheme that is employed), and it is correct that this is not used to communicate quality of service information. However, this is not a requirement of either claim 1 and, furthermore, it does not have a bearing on the way in which the SR information is communicated.
323. This is hypothetical because the DMRS is not sent alone. The only DMRS that matters are the ones which are included in a PUCCH format 1x signal.
324. In their Opening Skeleton at §4.74, the Defendants accepted that the DMRS are provided by the base station in the sense required by claim 1 of 689 but they assert that the d(0) symbols are not. The Defendants’ point on the d(0) symbols is that the UE has to decide whether it is sending an SR only or SR with ACK/NACK as well and if ACK/NACK, whether it sends ACK or NACK. The sending of ACK/NACK information requires the d(0) symbol(s) to be modulated and this appears to be the Defendants’ point.
325. As the Claimants pointed out in their closing, this was not a point raised in Mr Bishop’s evidence, nor was it a point put to Mr Anderson. There is, however, nothing in this point. Modulation simply adjusts the phase of the d(0) symbols. The SR information is still ‘based on pilot symbols provided by the base station’. Claim 1 of 689 does not require the phase of the signals that are sent to convey SR to be provided by the base station.
326. In any event, as indicated above, the processing to create both the DMRS and the d(0) regions in a PUCCH format 1x signal starts with the same base 12 sequence. The modulation symbols applied are as defined in the standard, as are the cyclic shifts, the orthogonal sequences and scrambling (if applied). Furthermore, the sr-PUCCH resource index specifies in which PRBs the corresponding PUCCH format signal is transmitted, the spreading sequences in the time domain and frequency domains, and the cyclic shifts. The sr-PUCCH resource index is sent to the UE in RRC signalling from the eNodeB in the SchedulingRequestConfig Information Element. Furthermore, as explained above, in my view it is clear that the SchedulingRequestConfig IE conveys indicia which allow the UE to identify its unique identifier and to use that in its signalling of SR. Hence, it is clear that both the DMRS and the symbols in the d(0) region are based on (i.e. representative of) signals provided by the base station.
327. Even if I assume I am wrong about the d(0) symbols, it remains the case that the PUCCH format 1x signals signalling SR are based on pilot symbols provided by the base station. For this purpose the DMRS symbols are sufficient.
328. To repeat what I said above, the argument is that if all that the reference signal does is coherent demodulation (which it can do because the reference signal is multiplexed with the data signals) such a reference signal does not convey any information (such as a need for uplink resources). As Mr Anderson put it in his reply report: ‘In such cases, the reference signals are not themselves part of the message; they are part of the means by which the message can be understood.’ At first sight, it may appear that there is a clear distinction between a reference signal which is sent and used for coherent demodulation of the message, and a reference signal which itself comprises or conveys the message.
329. However, as explained above, the distinction is more subtle. A reference signal can itself be coherently modulated to convey a particular message (such as ACK/NACK) but once coherently demodulated, the reference signal can convey another message (SR) simply by its presence (or absence).
330. As I understand matters, the Defendants developed their argument based on their understanding that Mr Anderson in his reply report raised the idea that ‘non-coherent modulation’ was a point of distinction between Kwon and LTE, as signified by the Claimants’ reliance on all the symbols in a PUCCH format 1x signal. It does not matter whether the Defendants are right or not in their assertion that this ‘non-coherent modulation’ point only became clear in Mr Anderson’s reply report. When the first round of expert reports was served, the Claimants’ case was that the PUCCH format 1/1a/1b signals comprised the pilot symbols of the claims.
331. Although the Defendants accept that, in LTE, a base station is able to determine the presence of an SR without demodulating the PUCCH symbols, they contended that was not an answer to their basic point that it is the d(0) symbols and not the DMRS which indicates a need for uplink resources. I have dealt with that basic point above.
332. The Defendants go on to assert that if this concept of non-coherent demodulation was enough to bring a message comprising a demodulation pilot and associated data symbols (i.e. a PUCCH format 1/1a/1b) within the claim, then the same is taught by Kwon too, specifically when a step (1) message is sent on a request channel. I therefore now turn to Kwon in order to consider the Defendants’ squeeze arguments.
333. Subject to those, I have found that the presence or absence of a PUCCH format 1x signal on the sr-PUCCH resource indicates a need or no need for uplink resources, and so integer B of EP689 is satisfied. For the same reason, integer C of EP259 is satisfied by the signalling of a PUCCH format 1x signal on the sr-PUCCH resource. So both EP259 and EP689, if valid, are essential to Release 8 of the LTE standard.
334. Kwon is a 3GPP Working Group 1 TDoc entitled ‘Uplink scheduling procedure’. It was submitted by Samsung to RAN WG1 meeting 43 held in Seoul in November 2005. The document is a proposal for an uplink scheduling procedure for LTE.
335. Section 1 (Introduction) explains that resource allocation and link adaptation in the LTE uplink will be much more challenging than in the downlink because (i) the scheduling will be done at the eNodeB while the traffic will be generated at the UE, and (ii) overhead is required to support channel dependent scheduling and link adaptation.
336. The introduction continues:
‘In this paper, we discuss some aspects related to scheduling procedure for SC-FDMA based uplink including:
· Scheduling request, e.g., buffer status, QoS parameter, power headroom (?) etc.
· Resource allocation (scheduling), e.g., channel dependent/independent, dynamic/static, etc.
· Scheduling grant which may include UE ID, resource allocation information, MCS level, HARQ related information, etc.
· HARQ, synchronous/asynchronous IR, adaptive, etc.
· Possible way to transmitting reference signals for SINR measure to support channel dependent scheduling.
· Power control and link adaptation.’
337. Section 2 is headed ‘Uplink data channel operation overview’ and focusses on this figure 1:
338. Kwon explains this figure as follows (and I have inserted the relevant steps for clarity, but not corrected the language):
‘Figure 1 illustrates uplink data channel operation overview in EUTRA. UE sends a request message [1] including e.g., buffer occupancy, QoS parameter, etc. when new data arrive in its buffer. Once receiving the request, node B might allow [1-1] the UE to transmit [1-2] SINR measure pilots to support channel dependent scheduling [1-3]. The Node B scheduler determines [2] which UEs are to be allowed to transmit data and send [3] a grant message to the scheduled UE. The scheduling might be done based on C/I measured on the SINR measure pilots. The scheduled UE transmits [4] then data using the resources assigned. After decoding [5] the data channel, the Node B sends back [6] ACK/NACK for a proper HARQ operation.’
339. So Kwon discloses the sending of an explicit request for uplink resources - this is the step (1) message. Kwon also explicitly discloses the sending of pilot symbols by the mobile in step (1-2).
340. Section 3 of Kwon is entitled ‘Considerations on scheduling procedure’. It explains:
‘Taking into account the amount of overhead required for the uplink scheduled transmission, e.g., buffer status report, reference signals for SNR measure, grant message, and so on, fully scheduled data transmissions where all the packet data transmissions are dynamically scheduled may not be a good approach in an orthogonal multiple access system, e.g., SC-FDMA. In addition to the dynamic resource allocation, a semi-static one could be also considered. Decisions on which methods are to be used would be dependent on the characteristics of the services, e.g., QoS requirement, statistics on packet generation, and so on.’
341. It goes on to discuss a service dependent scheduling procedure in order to minimise the overhead on both the uplink and downlink ‘so as to maximise the eventual system performance’. In section 3.1 it sets out three simple example service types in Table 1 (ranging from Type 1: mobile gaming which requires real-time and fixed sized packets issued at periodic intervals to Type 3: FTP or HTTP (i.e. web browsing) applications with delay-tolerant and variable sized packets) and then proposes in Table 2 a scheduling mechanism for each service type for request, resource allocation and grant operation:
342. So what was being proposed in Table 2 were possible ways of reducing the required overhead by employing a service-dependent scheduling mechanism.
343. The remainder of section 3 discusses HARQ (3.2) and (3.3) Channel-dependent scheduling, link adaptation and power control. It is clear from the conclusions in section 4 (and from the structure of the document overall) that the focus of Kwon is on his proposal for service-dependent scheduling where all the various functions in section 3 (request, resource allocation, grant, HARQ, AMC and power control procedure) vary according to different service types, in order to minimise the overhead on both uplink and downlink. It is clear that Section 2 and Figure 1 are presented as simply an overview of uplink data channel operation, providing the background against which the main proposals in section 3 are presented.
344. As the Claimants pointed out, the Defendants’ case on Kwon is not a conventional validity attack. Instead it is said that an obvious implementation of Kwon provides a squeeze on the essentiality case. Eventually, three issues emerged:
i) The first was based on the step (1) request message. It was common ground that this message is the means by which Kwon proposes that the mobile indicates a need for resources. It was also common ground that this message is not a pilot signal, but it would be sent on scheduled resources and would necessarily be accompanied by pilot symbols to support coherent demodulation of the data in the message (i.e. the buffer occupancy, QoS parameter, power headroom etc.).
ii) The second was based on the step (1-2) pilots. Mr Bishop said that whilst these signals were not sent with the express purpose of indicating a need for resources, they were ‘part of the process by which the mobile requests resources’.
iii) The third emerged for the first time in the cross-examination of Mr Anderson and is based on the reference to ‘a request channel’ for type 3 services in Table 2, but also involves the sending of the step (1) message on this ‘request channel’.
345. The Defendants’ argument was that, since the step (1) message would include or be accompanied by pilot symbols for coherent demodulation of the message, there was no material difference between those pilot symbols and the DMRS symbols in a PUCCH format 1x signal. However, it is clear that the pilot accompanying the step (1) message in Kwon does not indicate a need for resources. It is the data in the step (1) message which conveys the mobile’s need for resources. The accompanying pilot signal is there simply to enable the request itself to be decoded and understood. The pilot signal does not convey a need for resources. So this attempted squeeze fails.
346. The Defendants’ argument comprised the following steps:
i) First, that the experts agreed in the written evidence that the step (1-2) signals would be transmitted repeatedly and periodically until message (3) was received from the base station. Conversely, it was said that the step (1-2) message would not be sent when the mobile did not need an uplink resource.
ii) Second, that the obvious thing to do would be for the mobile to continue to send the step (1-2) signals even after receiving message (3), if there was more data in the buffer. The Defendants acknowledged this would be a modification to Kwon, but stated it was not essential to their obviousness case.
iii) Third, the Defendants introduced the notion of a higher layer interrupt (originating from higher layers) which would bring the need for uplink resources to an end. That would stop the sending of pilots.
iv) Fourth, the base station would not grant an uplink resource to a mobile whose signal (1-2) it did not detect, or at least it would be obvious to do that.
347. On those premises, the Defendants submitted that Kwon (or an obvious implementation of it) would work in exactly the same way as the example embodiment in the Patents (and I quote):
i) ‘When data arrives in the buffer, the mobile sends a message to the base station indicating that fact (message (1) of Kwon, step S500 in the patents).
ii) In response, the base station allocates a channel quality pilot to the mobile (message (1-1) of Kwon, steps S601/S501 in the patents);
iii) Once the pilot has been granted, the mobile starts sending it periodically (signal (1-2) of Kwon, step S507 in the patents).
iv) The mobile stops sending the pilot when it no longer has a need for a resource.
v) But the mobile keeps sending the pilot while it requires a resource.
vi) The base station only grants uplink resources to mobiles whose pilot signals it detects.
vii) The Defendants also submit that points iv) and v) mean that the mobile makes an ‘autonomous determination’ - this in response to a point made by the Claimants on essentiality.’
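To make the quoted submission concrete, what follows is a purely illustrative sketch of the scheme as the Defendants say it would flow from Kwon. It is not drawn from Kwon, the Patents or the LTE specifications: the class and method names are hypothetical, and the behaviour modelled is only that asserted in points i) to vi) above.

```python
# Purely illustrative sketch of the scheme as submitted (points i) to vi) above).
# Nothing here is taken from Kwon, the Patents or the LTE specifications; all
# names are hypothetical.
from collections import deque


class BaseStation:
    def __init__(self):
        self.pilots_detected = set()

    def receive_request(self, mobile):
        # step (1): the request message prompts allocation of a channel quality pilot
        mobile.pilot_allocated = True          # step (1-1)

    def receive_pilot(self, mobile):
        self.pilots_detected.add(mobile)

    def schedule(self):
        # point vi): grant uplink resources only to mobiles whose pilots were detected
        granted = list(self.pilots_detected)
        self.pilots_detected.clear()
        return granted


class Mobile:
    def __init__(self, name):
        self.name = name
        self.buffer = deque()                  # uplink data awaiting transmission
        self.pilot_allocated = False

    def data_arrives(self, base_station, payload):
        self.buffer.append(payload)
        base_station.receive_request(self)     # step (1): indicate data has arrived

    def tick(self, base_station):
        # points iv) and v): send the pilot periodically only while a resource is needed
        if self.pilot_allocated and self.buffer:
            base_station.receive_pilot(self)   # signal (1-2)


# Minimal walk-through of the submitted scheme:
bs = BaseStation()
ue = Mobile("UE-A")
ue.data_arrives(bs, "packet")                  # step (1)
ue.tick(bs)                                    # pilot sent while data is buffered
print([m.name for m in bs.schedule()])         # ['UE-A']: grant follows the detected pilot
ue.buffer.clear()                              # the need for resources ends
ue.tick(bs)                                    # no pilot sent
print([m.name for m in bs.schedule()])         # []
```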
348. This squeeze fails for the obvious reason that the step (1-2) pilots do not indicate whether or not the mobile needs uplink resources. That need has already been indicated by the step (1) message. These pilots (when configured) are under the control of the base station and are used by the base station to assess channel quality for the purpose of allocating resources amongst those mobiles that have requested them. The point here is that until the base station has sent its scheduling grant at step (3), the base station ‘knows’ that the mobile needs resources from its receipt of the step (1) message. The step (1-1) message and (1-2) pilots go back and forth to assist the base station in its allocation of resources until the base station allocates resources to this particular mobile in step (3), but the pilots do not indicate the need for resources. Furthermore, as I have said, the step (1-1) and (1-2) process is driven by the base station, not by the UE.
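The role which, on this analysis, the step (1-2) pilots do perform, namely informing the base station's channel-dependent allocation amongst mobiles which have already requested resources by their step (1) messages, might be sketched as follows. This is an illustration under stated assumptions only: the function name and the SNR figures are hypothetical and nothing in it is drawn from Kwon or the LTE specifications.

```python
# Hedged illustration of the pilots' role described above: the base station
# already knows which mobiles need resources (from their step (1) messages) and
# uses the step (1-2) pilots only to judge channel quality when choosing which
# requester to serve. The names and figures are hypothetical.
def schedule_requesters(pilot_snr_db):
    """Pick the requesting mobile with the best pilot-derived channel quality.

    pilot_snr_db maps mobiles that have already requested resources via step (1)
    to an SNR estimated from their step (1-2) pilots; the pilots themselves
    carry no request.
    """
    if not pilot_snr_db:
        return None
    return max(pilot_snr_db, key=pilot_snr_db.get)


# Both mobiles are known to need resources from their step (1) messages;
# the pilots only influence who is scheduled first.
print(schedule_requesters({"UE-A": 12.5, "UE-B": 7.0}))  # UE-A
```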
The step (1) message sent on a ‘dedicated request channel’
349. As I indicated above, this argument emerged for the first time in the cross-examination of Mr Anderson. Although both experts covered section 3 of Kwon in their first reports, Mr Bishop did not mention the ‘request channel’ at all and Mr Anderson mentioned it only in passing for completeness. Neither expert referred to section 3 of Kwon in their reply report. Thus, the written evidence contained no discussion at all about what this ‘request channel’ was or how it would work and, aside from the reference to ‘a request channel’ in Table 2, Kwon contains no discussion or further explanation.
350. It was put to Mr Anderson that the skilled person would understand the words ‘a request channel’ to be teaching the use of a channel dedicated for the purpose of sending step (1) resource requests. He explained he found it difficult to read anything into those two words in Kwon in the absence of any explanation. When pushed, he said two possibilities were likely to occur to the skilled person: the use of allocated resources (without simultaneous data transmission) or use of a random access procedure (both familiar from his CGK):
Q. I know you say may not need to be allocated. I am not asking you about this is the only way to do it. I am suggesting to you that one way for a skilled person to implement this, which would just be using the teaching of the document and their CGK, would be to provide for a request channel on which they could send message (1) requests using allocated resources. That is just one ordinary, uninventive, unexciting, unimaginative way to implement Kwon?
A. I do not know whether that would occur to the skilled person. I think the skilled person would look at the request channel and what they know there is that random access can be used to request. That would perhaps be the norm in the case where you do not have resources on which to send your message (1) by either the piggybacking scheme or just where those resources are available. Whether you do or do not have data, then you could send the message (1) on the scheduled resources. So you have scheduled resources and you have random access resources. I think the first part of this sentence, in Table 2, piggybacked on data transmission, would refer to the scheduled resources and perhaps the latter refers to unscheduled. It is not clear. I find it difficult to interpret what was intended exactly by Kwon with request channel, because it is not further described.
351. Mr Anderson’s suggestion of some form of random access procedure was consistent with the suggested use of synchronous random access for sending a scheduling request, which the experts agreed had been proposed in TR 25.912 and TR 25.814.
352. For his part, Mr Bishop mentioned the possibility of a dedicated request channel in his cross-examination, no doubt because he had heard the point raised in the cross-examination of Mr Anderson. He accepted, however, that he had not mentioned that possibility in his reports, nor mentioned the reference to ‘request channel’ in Table 2, and that he had considered section 3 of Kwon to be no more than background information. He also agreed that the skilled person would not have given it any particular consideration:
Q. If we go back in your first report and have a look at what you say about Kwon, maybe you could go to page 29, in 8.7 on page 29 you refer to section 3 and section 3.1. Do you see that?
A. Yes, I can see that.
Q. And at no point in that paragraph, which is the only paragraph in which you consider section 3.1, do you mention the request channel?
A. No, I do not.
Q. No. So it is obviously not something that you thought was of any significance when you wrote your first report, or indeed your second report, because you do not mention it there either?
A. Generally, when I look at Kwon, I consider the information of section 3 to be more background information and does not lead directly to the patent, which was my main concern with Kwon.
Q. Okay. I suggest to you, Mr. Bishop, that the skilled person reading Kwon would not have thought that it disclosed the idea of a dedicated request channel? If the skilled person had thought that, in your opinion, you would have mentioned it, would you not?
A. Again, I was not really focusing on section 3. I think that the skilled person, looking to implement Kwon for the purposes of the allocation of resources, as per Figure 1, as opposed to the discussion on the different applications, or the different way that resources could be asked, depending on the service type that was being requested, I do not think that they would have considered that particularly.
353. Moreover, Mr Bishop agreed that there was nothing in the TRs for LTE that suggested anything about a dedicated resource on which to make scheduling requests:
Q. No. We looked at the TR documents a moment ago, and there was no suggestion in those documents of the use of a dedicated resource on which to make scheduling requests, was there?
A. No, there was no specific mention of that. There was ----
Q. There was no mention of that at all, was there?
A. There was no mention of that at all, but, again, it was clear that they had not made their final decisions on how to schedule in LTE, and I think that was clear, reading the document. They had some ideas about it, but they had not arrived at a final destination.
354. Even if I assume that the skilled person decides to implement a dedicated request channel on which step (1) messages are sent, there still remains a difference of some significance. It is clear that it is the data in the step (1) message in Kwon which conveys the mobile’s need for uplink resources. However, to exert any squeeze the Defendants still need to demonstrate that it was obvious to the Skilled Person, without using hindsight, to decide to change the indicator from the data in the step (1) message to a pilot signal. Mr Bishop did not suggest that such a change was obvious. Indeed, he went no further than acknowledging that the resource request message (as he put it) was in the data, and that a pilot is sent at the same time. He suggested that the fact that the step (1) message is sent on a dedicated request channel ‘might be seen to imply that all aspects of it are asking for data’, but in my view he only made that suggestion with the benefit of his hindsight knowledge of the arguments on essentiality and where the Defendants needed Kwon to go for any squeeze to be exerted.
355. The point was not put clearly to Mr Anderson, not least because the premise underlying both the question and the answer was not established:
Q. If the base station detected the pilot signal, the pilot symbols, of a request message on such a request channel, it would know that it had received a request from that UE?
A. If there was an exclusive purpose for that request channel for only that message type, yes.
356. So, even if I assume the skilled person would implement a request channel dedicated to step (1) messages, the pilot symbols which are in or which accompany the step (1) message are there only for their ordinary purpose of enabling coherent demodulation of the data. They are not transmitted or received for any other purpose. Furthermore, it was not at all clear precisely what message would be sent on this dedicated request channel. The implication was that the mere presence of a signal (i.e. a pilot or reference signal) on this channel would indicate a need for resources. That raises the question of what happens to the data in the Kwon step (1) message: is it transmitted at the same time or not?
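The distinction identified in the preceding paragraph, between the mere presence of a signal on a dedicated channel indicating a need and the data in the step (1) message conveying that need, can be illustrated with a short sketch of simple presence (energy) detection. The threshold, sequence length and names are hypothetical assumptions and are not drawn from Kwon, the Patents or the LTE specifications.

```python
# Hedged illustration (not from the evidence or the LTE specs) of detecting the
# mere presence of a signal on a dedicated resource, as opposed to decoding the
# data that actually carries a request. All values are assumed.
import numpy as np

def presence_detected(received, threshold=0.5):
    """On-off style detection: is any signal present on the dedicated resource?"""
    return np.mean(np.abs(received) ** 2) > threshold

rng = np.random.default_rng(1)
noise_only = 0.1 * (rng.standard_normal(12) + 1j * rng.standard_normal(12))
signal = np.ones(12) + noise_only     # a reference-signal-like sequence plus noise

print(presence_detected(noise_only))  # False: nothing sent, so no need inferred
print(presence_detected(signal))      # True: presence alone would signal a need,
                                      # leaving open what happens to the step (1) data
```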
357. In my view, not only was this argument shot through with hindsight, but there was no or insufficient evidence even to get it off the ground. So this attempted squeeze argument fails.
358. Finally, I should record that the Defendants sought to explain away the absence of any discussion of the ‘request channel’ in either of Mr Bishop’s reports by alleging that this argument only achieved prominence once the Claimants’ reliance on non-coherent demodulation (i.e. the mere existence of the PUCCH format 1x signal indicated the need for uplink resources) became apparent. The Defendants asserted this was raised for the first time only in Mr Anderson’s reply report and was only really explained in the Claimants’ oral opening, but this is incorrect.
359. For these reasons, the validity attack and/or squeezes based on Kwon fail.
ADDED MATTER
360. The Defendants pursue two added matter attacks against EP689. The applicable principles are not in dispute. I was referred to Nokia OYJ v IPCom GmbH & Co KG [2012] EWCA Civ 567 at [46]-[50] per Kitchin LJ as he then was.
361. Both allegations depend entirely on the wording of the claims in EP689 because between the application as filed and EP689, there was no material change in the description - just a single added paragraph [0009] to include a very short acknowledgement of prior art.
(1) transferring information indicating no need of a wireless resource
362. On this feature, the Defendants say that the new information in EP689 as granted is the concept of transmitting a positive signal indicating that the mobile does not need an uplink resource. They say this new information is in integer B of claim 1 as granted.
363. This point turns on the correct construction of integer B. I have already determined the construction issue against the Defendants (see paragraphs 162-167 above) and this also disposes of the added matter argument.
(2) information ‘based on’ pilot symbols
364. Again, the new information is said to reside in part of integer B - particularly in the words ‘based on’. The Defendants say that the new information is the idea that, rather than the information being conveyed by the pilot symbols, it is instead conveyed by something related to the pilot symbols but which is not the pilot symbols themselves.
366. For these reasons, I reject the two added matter attacks which were pursued.
Conclusion
367. For all the reasons set out above, I find claim 1 of each of EP259 and EP689 is valid and essential to Release 8 of the LTE standard.
[1] The two configurations are referred to as “Normal Cyclic Prefix” (7 symbols per slot) and “Extended Cyclic Prefix” (6 symbols per slot).
[2] This is the cqi-PUCCH-ResourceIndex parameter of TS 36.331.
[3] This is the sr-PUCCH-ResourceIndex parameter of TS 36.331.
[4] Exactly which resource is used for a given sub-frame depends on a ‘base’ index N(1)PUCCH and a dynamic offset nCCE that is determined using attributes of the PDCCH that conveyed the corresponding downlink grant to the UE.
[5] For length 4, Walsh Hadamard sequences are used whilst for length 3, the same DFT sequences as for the DMRS region are used.
[6] These are transmitted at the same power in the ith sub-frame (denoted as “PPUCCH(i)” in section 5.1.2.1 of TS 36.213).
[7] The power or energy of the d(0) symbol is not affected by its phase.
[8] The orthogonal multiplexing scheme means that this effectively removes intra-cell interference, leaving only the (separated) desired signals along with any inter-cell interference and noise.